00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 869
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3534
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.059 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.059 The recommended git tool is: git
00:00:00.060 using credential 00000000-0000-0000-0000-000000000002
00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.103 Fetching changes from the remote Git repository
00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.156 Using shallow fetch with depth 1
00:00:00.156 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.156 > git --version # timeout=10
00:00:00.205 > git --version # 'git version 2.39.2'
00:00:00.205 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.117 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.129 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.141 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD)
00:00:05.141 > git config core.sparsecheckout # timeout=10
00:00:05.152 > git read-tree -mu HEAD # timeout=10
00:00:05.170 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5
00:00:05.188 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images"
00:00:05.188 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10
00:00:05.274 [Pipeline] Start of Pipeline
00:00:05.288 [Pipeline] library
00:00:05.289 Loading library shm_lib@master
00:00:05.290 Library shm_lib@master is cached. Copying from home.
00:00:05.308 [Pipeline] node
00:00:05.317 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.318 [Pipeline] {
00:00:05.326 [Pipeline] catchError
00:00:05.327 [Pipeline] {
00:00:05.339 [Pipeline] wrap
00:00:05.347 [Pipeline] {
00:00:05.354 [Pipeline] stage
00:00:05.356 [Pipeline] { (Prologue)
00:00:05.577 [Pipeline] sh
00:00:05.863 + logger -p user.info -t JENKINS-CI
00:00:05.883 [Pipeline] echo
00:00:05.885 Node: CYP12
00:00:05.893 [Pipeline] sh
00:00:06.203 [Pipeline] setCustomBuildProperty
00:00:06.213 [Pipeline] echo
00:00:06.214 Cleanup processes
00:00:06.218 [Pipeline] sh
00:00:06.501 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.501 2855762 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.513 [Pipeline] sh
00:00:06.801 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.801 ++ grep -v 'sudo pgrep'
00:00:06.801 ++ awk '{print $1}'
00:00:06.801 + sudo kill -9
00:00:06.801 + true
00:00:06.815 [Pipeline] cleanWs
00:00:06.824 [WS-CLEANUP] Deleting project workspace...
00:00:06.824 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.831 [WS-CLEANUP] done
00:00:06.835 [Pipeline] setCustomBuildProperty
00:00:06.848 [Pipeline] sh
00:00:07.135 + sudo git config --global --replace-all safe.directory '*'
00:00:07.200 [Pipeline] httpRequest
00:00:07.541 [Pipeline] echo
00:00:07.543 Sorcerer 10.211.164.101 is alive
00:00:07.553 [Pipeline] retry
00:00:07.555 [Pipeline] {
00:00:07.570 [Pipeline] httpRequest
00:00:07.574 HttpMethod: GET
00:00:07.575 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:07.575 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:07.590 Response Code: HTTP/1.1 200 OK
00:00:07.590 Success: Status code 200 is in the accepted range: 200,404
00:00:07.591 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:10.862 [Pipeline] }
00:00:10.882 [Pipeline] // retry
00:00:10.890 [Pipeline] sh
00:00:11.182 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:11.200 [Pipeline] httpRequest
00:00:11.605 [Pipeline] echo
00:00:11.607 Sorcerer 10.211.164.101 is alive
00:00:11.615 [Pipeline] retry
00:00:11.617 [Pipeline] {
00:00:11.632 [Pipeline] httpRequest
00:00:11.637 HttpMethod: GET
00:00:11.638 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:11.639 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:11.659 Response Code: HTTP/1.1 200 OK
00:00:11.659 Success: Status code 200 is in the accepted range: 200,404
00:00:11.660 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:42.790 [Pipeline] }
00:00:42.807 [Pipeline] // retry
00:00:42.814 [Pipeline] sh
00:00:43.107 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:45.667 [Pipeline] sh
00:00:45.956 + git -C spdk log --oneline -n5
00:00:45.956 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:45.956 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:45.956 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:45.956 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:45.956 9469ea403 nvme/fio_plugin: add trim support
00:00:45.976 [Pipeline] withCredentials
00:00:45.988 > git --version # timeout=10
00:00:46.003 > git --version # 'git version 2.39.2'
00:00:46.023 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:46.025 [Pipeline] {
00:00:46.035 [Pipeline] retry
00:00:46.037 [Pipeline] {
00:00:46.051 [Pipeline] sh
00:00:46.340 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:46.614 [Pipeline] }
00:00:46.633 [Pipeline] // retry
00:00:46.638 [Pipeline] }
00:00:46.654 [Pipeline] // withCredentials
00:00:46.662 [Pipeline] httpRequest
00:00:47.066 [Pipeline] echo
00:00:47.067 Sorcerer 10.211.164.101 is alive
00:00:47.076 [Pipeline] retry
00:00:47.078 [Pipeline] {
00:00:47.093 [Pipeline] httpRequest
00:00:47.098 HttpMethod: GET
00:00:47.098 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:47.099 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:47.108 Response Code: HTTP/1.1 200 OK
00:00:47.108 Success: Status code 200 is in the accepted range: 200,404
00:00:47.108 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:15.133 [Pipeline] }
00:01:15.149 [Pipeline] // retry
00:01:15.157 [Pipeline] sh
00:01:15.448 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:17.381 [Pipeline] sh
00:01:17.672 + git -C dpdk log --oneline -n5
00:01:17.672 eeb0605f11 version: 23.11.0
00:01:17.672 238778122a doc: update release notes for 23.11
00:01:17.672 46aa6b3cfc doc: fix description of RSS features
00:01:17.672 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:17.672 7e421ae345 devtools: support skipping forbid rule check
00:01:17.683 [Pipeline] }
00:01:17.697 [Pipeline] // stage
00:01:17.706 [Pipeline] stage
00:01:17.708 [Pipeline] { (Prepare)
00:01:17.730 [Pipeline] writeFile
00:01:17.747 [Pipeline] sh
00:01:18.037 + logger -p user.info -t JENKINS-CI
00:01:18.051 [Pipeline] sh
00:01:18.343 + logger -p user.info -t JENKINS-CI
00:01:18.357 [Pipeline] sh
00:01:18.649 + cat autorun-spdk.conf
00:01:18.649 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.649 SPDK_TEST_NVMF=1
00:01:18.649 SPDK_TEST_NVME_CLI=1
00:01:18.649 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:18.649 SPDK_TEST_NVMF_NICS=e810
00:01:18.649 SPDK_TEST_VFIOUSER=1
00:01:18.649 SPDK_RUN_UBSAN=1
00:01:18.649 NET_TYPE=phy
00:01:18.649 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:18.649 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:18.657 RUN_NIGHTLY=1
00:01:18.662 [Pipeline] readFile
00:01:18.680 [Pipeline] withEnv
00:01:18.682 [Pipeline] {
00:01:18.691 [Pipeline] sh
00:01:18.977 + set -ex
00:01:18.977 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:18.977 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:18.977 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.977 ++ SPDK_TEST_NVMF=1
00:01:18.977 ++ SPDK_TEST_NVME_CLI=1
00:01:18.977 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:18.977 ++ SPDK_TEST_NVMF_NICS=e810
00:01:18.977 ++ SPDK_TEST_VFIOUSER=1
00:01:18.977 ++ SPDK_RUN_UBSAN=1
00:01:18.977 ++ NET_TYPE=phy
00:01:18.977 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:18.977 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:18.977 ++ RUN_NIGHTLY=1
00:01:18.977 + case $SPDK_TEST_NVMF_NICS in
00:01:18.977 + DRIVERS=ice
00:01:18.977 + [[ tcp == \r\d\m\a ]]
00:01:18.977 + [[ -n ice ]]
00:01:18.977 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:18.977 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:18.977 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:18.977 rmmod: ERROR: Module irdma is not currently loaded
00:01:18.977 rmmod: ERROR: Module i40iw is not currently loaded
00:01:18.977 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:18.977 + true
00:01:18.977 + for D in $DRIVERS
00:01:18.977 + sudo modprobe ice
00:01:18.977 + exit 0
00:01:18.988 [Pipeline] }
00:01:18.999 [Pipeline] // withEnv
00:01:19.004 [Pipeline] }
00:01:19.016 [Pipeline] // stage
00:01:19.024 [Pipeline] catchError
00:01:19.026 [Pipeline] {
00:01:19.039 [Pipeline] timeout
00:01:19.039 Timeout set to expire in 1 hr 0 min
00:01:19.041 [Pipeline] {
00:01:19.055 [Pipeline] stage
00:01:19.057 [Pipeline] { (Tests)
00:01:19.071 [Pipeline] sh
00:01:19.362 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:19.363 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:19.363 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:19.363 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:19.363 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:19.363 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:19.363 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:19.363 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:19.363 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:19.363 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:19.363 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:19.363 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:19.363 + source /etc/os-release
00:01:19.363 ++ NAME='Fedora Linux'
00:01:19.363 ++ VERSION='39 (Cloud Edition)'
00:01:19.363 ++ ID=fedora
00:01:19.363 ++ VERSION_ID=39
00:01:19.363 ++ VERSION_CODENAME=
00:01:19.363 ++ PLATFORM_ID=platform:f39
00:01:19.363 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:19.363 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:19.363 ++ LOGO=fedora-logo-icon
00:01:19.363 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:19.363 ++ HOME_URL=https://fedoraproject.org/
00:01:19.363 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:19.363 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:19.363 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:19.363 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:19.363 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:19.363 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:19.363 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:19.363 ++ SUPPORT_END=2024-11-12
00:01:19.363 ++ VARIANT='Cloud Edition'
00:01:19.363 ++ VARIANT_ID=cloud
00:01:19.363 + uname -a
00:01:19.363 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:19.363 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:22.668 Hugepages
00:01:22.668 node hugesize free / total
00:01:22.668 node0 1048576kB 0 / 0
00:01:22.668 node0 2048kB 0 / 0
00:01:22.668 node1 1048576kB 0 / 0
00:01:22.668 node1 2048kB 0 / 0
00:01:22.668
00:01:22.668 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:22.668 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:22.668 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:22.668 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:22.668 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:22.668 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:22.668 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:22.668 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:22.668 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:22.668 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:22.668 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:22.668 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:22.668 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:22.668 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:22.668 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:22.668 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:22.668 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:22.669 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:22.669 + rm -f /tmp/spdk-ld-path
00:01:22.669 + source autorun-spdk.conf
00:01:22.669 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.669 ++ SPDK_TEST_NVMF=1
00:01:22.669 ++ SPDK_TEST_NVME_CLI=1
00:01:22.669 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:22.669 ++ SPDK_TEST_NVMF_NICS=e810
00:01:22.669 ++ SPDK_TEST_VFIOUSER=1
00:01:22.669 ++ SPDK_RUN_UBSAN=1
00:01:22.669 ++ NET_TYPE=phy
00:01:22.669 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:22.669 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:22.669 ++ RUN_NIGHTLY=1
00:01:22.669 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:22.669 + [[ -n '' ]]
00:01:22.669 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:22.669 + for M in /var/spdk/build-*-manifest.txt
00:01:22.669 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:22.669 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:22.669 + for M in /var/spdk/build-*-manifest.txt
00:01:22.669 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:22.669 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:22.669 + for M in /var/spdk/build-*-manifest.txt
00:01:22.669 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:22.669 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:22.669 ++ uname
00:01:22.669 + [[ Linux == \L\i\n\u\x ]]
00:01:22.669 + sudo dmesg -T
00:01:22.669 + sudo dmesg --clear
00:01:22.669 + dmesg_pid=2856783
00:01:22.669 + [[ Fedora Linux == FreeBSD ]]
00:01:22.669 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:22.669 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:22.669 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:22.669 + [[ -x /usr/src/fio-static/fio ]]
00:01:22.669 + export FIO_BIN=/usr/src/fio-static/fio
00:01:22.669 + FIO_BIN=/usr/src/fio-static/fio
00:01:22.669 + sudo dmesg -Tw
00:01:22.669 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:22.669 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:22.669 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:22.669 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:22.669 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:22.669 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:22.669 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:22.669 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:22.669 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:22.669 Test configuration:
00:01:22.669 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.669 SPDK_TEST_NVMF=1
00:01:22.669 SPDK_TEST_NVME_CLI=1
00:01:22.669 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:22.669 SPDK_TEST_NVMF_NICS=e810
00:01:22.669 SPDK_TEST_VFIOUSER=1
00:01:22.669 SPDK_RUN_UBSAN=1
00:01:22.669 NET_TYPE=phy
00:01:22.669 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:22.669 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:22.669 RUN_NIGHTLY=1
17:11:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
17:11:31 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
17:11:31 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:11:31 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
17:11:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:11:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:11:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:11:31 -- paths/export.sh@5 -- $ export PATH
17:11:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:11:31 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
17:11:31 -- common/autobuild_common.sh@440 -- $ date +%s
17:11:31 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1728832291.XXXXXX
00:01:22.930 17:11:31 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1728832291.4umJIq
17:11:31 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
17:11:31 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']'
17:11:31 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
17:11:31 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
17:11:31 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
17:11:31 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
17:11:31 -- common/autobuild_common.sh@456 -- $ get_config_params
17:11:31 -- common/autotest_common.sh@387 -- $ xtrace_disable
17:11:31 -- common/autotest_common.sh@10 -- $ set +x
17:11:31 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
17:11:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
17:11:31 -- spdk/autobuild.sh@12 -- $ umask 022
17:11:31 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
17:11:31 -- spdk/autobuild.sh@16 -- $ date -u
00:01:22.930 Sun Oct 13 03:11:31 PM UTC 2024
17:11:31 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:22.930 LTS-66-g726a04d70
17:11:31 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
17:11:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
17:11:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
17:11:31 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
17:11:31 -- common/autotest_common.sh@1083 -- $ xtrace_disable
17:11:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.930 ************************************
00:01:22.930 START TEST ubsan
00:01:22.930 ************************************
17:11:31 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:22.930 using ubsan
00:01:22.930
00:01:22.930 real 0m0.001s
00:01:22.930 user 0m0.000s
00:01:22.930 sys 0m0.000s
17:11:31 -- common/autotest_common.sh@1105 -- $ xtrace_disable
17:11:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.930 ************************************
00:01:22.930 END TEST ubsan
00:01:22.930 ************************************
17:11:31 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
17:11:31 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
17:11:31 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk
17:11:31 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
17:11:31 -- common/autotest_common.sh@1083 -- $ xtrace_disable
17:11:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.930 ************************************
00:01:22.930 START TEST build_native_dpdk
00:01:22.930 ************************************
17:11:31 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk
17:11:31 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
17:11:31 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
17:11:31 -- common/autobuild_common.sh@50 -- $ local compiler_version
17:11:31 -- common/autobuild_common.sh@51 -- $ local compiler
17:11:31 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
17:11:31 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
17:11:31 -- common/autobuild_common.sh@55 -- $ compiler=gcc
17:11:31 -- common/autobuild_common.sh@61 -- $ export CC=gcc
17:11:31 -- common/autobuild_common.sh@61 -- $ CC=gcc
17:11:31 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
17:11:31 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
17:11:31 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
17:11:31 -- common/autobuild_common.sh@68 -- $ compiler_version=13
17:11:31 -- common/autobuild_common.sh@69 -- $ compiler_version=13
17:11:31 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
17:11:31 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
17:11:31 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
17:11:31 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
17:11:31 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
17:11:31 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:22.931 eeb0605f11 version: 23.11.0
00:01:22.931 238778122a doc: update release notes for 23.11
00:01:22.931 46aa6b3cfc doc: fix description of RSS features
00:01:22.931 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:22.931 7e421ae345 devtools: support skipping forbid rule check
17:11:31 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
17:11:31 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
17:11:31 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
17:11:31 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
17:11:31 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
17:11:31 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
17:11:31 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
17:11:31 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
17:11:31 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
17:11:31 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
17:11:31 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
17:11:31 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
17:11:31 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
17:11:31 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
17:11:31 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
17:11:31 -- common/autobuild_common.sh@168 -- $ uname -s
17:11:31 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
17:11:31 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
17:11:31 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0
17:11:31 -- scripts/common.sh@332 -- $ local ver1 ver1_l
17:11:31 -- scripts/common.sh@333 -- $ local ver2 ver2_l
17:11:31 -- scripts/common.sh@335 -- $ IFS=.-:
17:11:31 -- scripts/common.sh@335 -- $ read -ra ver1
17:11:31 -- scripts/common.sh@336 -- $ IFS=.-:
17:11:31 -- scripts/common.sh@336 -- $ read -ra ver2
17:11:31 -- scripts/common.sh@337 -- $ local 'op=<'
17:11:31 -- scripts/common.sh@339 -- $ ver1_l=3
17:11:31 -- scripts/common.sh@340 -- $ ver2_l=3
17:11:31 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
17:11:31 -- scripts/common.sh@343 -- $ case "$op" in
17:11:31 -- scripts/common.sh@344 -- $ : 1
17:11:31 -- scripts/common.sh@363 -- $ (( v = 0 ))
17:11:31 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
17:11:31 -- scripts/common.sh@364 -- $ decimal 23
17:11:31 -- scripts/common.sh@352 -- $ local d=23
17:11:31 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
17:11:31 -- scripts/common.sh@354 -- $ echo 23
17:11:31 -- scripts/common.sh@364 -- $ ver1[v]=23
17:11:31 -- scripts/common.sh@365 -- $ decimal 21
17:11:31 -- scripts/common.sh@352 -- $ local d=21
17:11:31 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
17:11:31 -- scripts/common.sh@354 -- $ echo 21
17:11:31 -- scripts/common.sh@365 -- $ ver2[v]=21
17:11:31 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
17:11:31 -- scripts/common.sh@366 -- $ return 1
17:11:31 -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:22.931 patching file config/rte_config.h
00:01:22.931 Hunk #1 succeeded at 60 (offset 1 line).
17:11:31 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
17:11:31 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0
17:11:31 -- scripts/common.sh@332 -- $ local ver1 ver1_l
17:11:31 -- scripts/common.sh@333 -- $ local ver2 ver2_l
17:11:31 -- scripts/common.sh@335 -- $ IFS=.-:
17:11:31 -- scripts/common.sh@335 -- $ read -ra ver1
17:11:31 -- scripts/common.sh@336 -- $ IFS=.-:
17:11:31 -- scripts/common.sh@336 -- $ read -ra ver2
17:11:31 -- scripts/common.sh@337 -- $ local 'op=<'
17:11:31 -- scripts/common.sh@339 -- $ ver1_l=3
17:11:31 -- scripts/common.sh@340 -- $ ver2_l=3
17:11:31 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
17:11:31 -- scripts/common.sh@343 -- $ case "$op" in
17:11:31 -- scripts/common.sh@344 -- $ : 1
17:11:31 -- scripts/common.sh@363 -- $ (( v = 0 ))
17:11:31 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
17:11:31 -- scripts/common.sh@364 -- $ decimal 23
17:11:31 -- scripts/common.sh@352 -- $ local d=23
17:11:31 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
17:11:31 -- scripts/common.sh@354 -- $ echo 23
17:11:31 -- scripts/common.sh@364 -- $ ver1[v]=23
17:11:31 -- scripts/common.sh@365 -- $ decimal 24
17:11:31 -- scripts/common.sh@352 -- $ local d=24
17:11:31 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]]
17:11:31 -- scripts/common.sh@354 -- $ echo 24
17:11:31 -- scripts/common.sh@365 -- $ ver2[v]=24
17:11:31 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
17:11:31 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
17:11:31 -- scripts/common.sh@367 -- $ return 0
17:11:31 -- common/autobuild_common.sh@177 -- $ patch -p1
00:01:22.931 patching file lib/pcapng/rte_pcapng.c
17:11:31 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
17:11:31 -- common/autobuild_common.sh@181 -- $ uname -s
17:11:31 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
17:11:31 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
17:11:31 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:28.226 The Meson build system
00:01:28.226 Version: 1.5.0
00:01:28.226 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:28.226 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:28.226 Build type: native build
00:01:28.226 Program cat found: YES (/usr/bin/cat)
00:01:28.226 Project name: DPDK
00:01:28.226 Project version: 23.11.0
00:01:28.226 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:28.226 C linker for the host machine: gcc ld.bfd 2.40-14
00:01:28.226 Host machine cpu family: x86_64
00:01:28.226 Host machine cpu: x86_64
00:01:28.226 Message: ## Building in Developer Mode ##
00:01:28.226 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:28.226 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:28.226 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:28.226 Program python3 found: YES (/usr/bin/python3)
00:01:28.226 Program cat found: YES (/usr/bin/cat)
00:01:28.226 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:28.226 Compiler for C supports arguments -march=native: YES 00:01:28.226 Checking for size of "void *" : 8 00:01:28.226 Checking for size of "void *" : 8 (cached) 00:01:28.226 Library m found: YES 00:01:28.226 Library numa found: YES 00:01:28.226 Has header "numaif.h" : YES 00:01:28.226 Library fdt found: NO 00:01:28.226 Library execinfo found: NO 00:01:28.226 Has header "execinfo.h" : YES 00:01:28.226 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:28.226 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:28.226 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:28.226 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:28.226 Run-time dependency openssl found: YES 3.1.1 00:01:28.226 Run-time dependency libpcap found: YES 1.10.4 00:01:28.226 Has header "pcap.h" with dependency libpcap: YES 00:01:28.226 Compiler for C supports arguments -Wcast-qual: YES 00:01:28.226 Compiler for C supports arguments -Wdeprecated: YES 00:01:28.226 Compiler for C supports arguments -Wformat: YES 00:01:28.226 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:28.226 Compiler for C supports arguments -Wformat-security: NO 00:01:28.226 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:28.226 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:28.226 Compiler for C supports arguments -Wnested-externs: YES 00:01:28.226 Compiler for C supports arguments -Wold-style-definition: YES 00:01:28.226 Compiler for C supports arguments -Wpointer-arith: YES 00:01:28.226 Compiler for C supports arguments -Wsign-compare: YES 00:01:28.226 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:28.226 Compiler for C supports arguments -Wundef: YES 00:01:28.226 Compiler for C supports arguments -Wwrite-strings: YES 00:01:28.226 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:28.226 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:28.226 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:01:28.226 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:28.226 Program objdump found: YES (/usr/bin/objdump) 00:01:28.226 Compiler for C supports arguments -mavx512f: YES 00:01:28.226 Checking if "AVX512 checking" compiles: YES 00:01:28.226 Fetching value of define "__SSE4_2__" : 1 00:01:28.226 Fetching value of define "__AES__" : 1 00:01:28.226 Fetching value of define "__AVX__" : 1 00:01:28.226 Fetching value of define "__AVX2__" : 1 00:01:28.226 Fetching value of define "__AVX512BW__" : 1 00:01:28.226 Fetching value of define "__AVX512CD__" : 1 00:01:28.226 Fetching value of define "__AVX512DQ__" : 1 00:01:28.226 Fetching value of define "__AVX512F__" : 1 00:01:28.226 Fetching value of define "__AVX512VL__" : 1 00:01:28.226 Fetching value of define "__PCLMUL__" : 1 00:01:28.226 Fetching value of define "__RDRND__" : 1 00:01:28.226 Fetching value of define "__RDSEED__" : 1 00:01:28.226 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:28.226 Fetching value of define "__znver1__" : (undefined) 00:01:28.226 Fetching value of define "__znver2__" : (undefined) 00:01:28.226 Fetching value of define "__znver3__" : (undefined) 00:01:28.226 Fetching value of define "__znver4__" : (undefined) 00:01:28.226 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:28.226 Message: lib/log: Defining dependency "log" 00:01:28.226 Message: lib/kvargs: Defining dependency "kvargs" 00:01:28.226 Message: lib/telemetry: Defining dependency "telemetry" 00:01:28.226 Checking for function "getentropy" : NO 00:01:28.226 Message: lib/eal: Defining dependency "eal" 00:01:28.226 Message: lib/ring: Defining dependency "ring" 00:01:28.226 Message: lib/rcu: Defining dependency "rcu" 00:01:28.226 Message: lib/mempool: Defining dependency "mempool" 00:01:28.226 Message: lib/mbuf: Defining dependency "mbuf" 00:01:28.226 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:28.226 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:01:28.227 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:28.227 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:28.227 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:28.227 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:28.227 Compiler for C supports arguments -mpclmul: YES 00:01:28.227 Compiler for C supports arguments -maes: YES 00:01:28.227 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:28.227 Compiler for C supports arguments -mavx512bw: YES 00:01:28.227 Compiler for C supports arguments -mavx512dq: YES 00:01:28.227 Compiler for C supports arguments -mavx512vl: YES 00:01:28.227 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:28.227 Compiler for C supports arguments -mavx2: YES 00:01:28.227 Compiler for C supports arguments -mavx: YES 00:01:28.227 Message: lib/net: Defining dependency "net" 00:01:28.227 Message: lib/meter: Defining dependency "meter" 00:01:28.227 Message: lib/ethdev: Defining dependency "ethdev" 00:01:28.227 Message: lib/pci: Defining dependency "pci" 00:01:28.227 Message: lib/cmdline: Defining dependency "cmdline" 00:01:28.227 Message: lib/metrics: Defining dependency "metrics" 00:01:28.227 Message: lib/hash: Defining dependency "hash" 00:01:28.227 Message: lib/timer: Defining dependency "timer" 00:01:28.227 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:28.227 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:28.227 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:28.227 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:28.227 Message: lib/acl: Defining dependency "acl" 00:01:28.227 Message: lib/bbdev: Defining dependency "bbdev" 00:01:28.227 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:28.227 Run-time dependency libelf found: YES 0.191 00:01:28.227 Message: lib/bpf: Defining dependency "bpf" 00:01:28.227 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:28.227 
Message: lib/compressdev: Defining dependency "compressdev" 00:01:28.227 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:28.227 Message: lib/distributor: Defining dependency "distributor" 00:01:28.227 Message: lib/dmadev: Defining dependency "dmadev" 00:01:28.227 Message: lib/efd: Defining dependency "efd" 00:01:28.227 Message: lib/eventdev: Defining dependency "eventdev" 00:01:28.227 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:28.227 Message: lib/gpudev: Defining dependency "gpudev" 00:01:28.227 Message: lib/gro: Defining dependency "gro" 00:01:28.227 Message: lib/gso: Defining dependency "gso" 00:01:28.227 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:28.227 Message: lib/jobstats: Defining dependency "jobstats" 00:01:28.227 Message: lib/latencystats: Defining dependency "latencystats" 00:01:28.227 Message: lib/lpm: Defining dependency "lpm" 00:01:28.227 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:28.227 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:28.227 Fetching value of define "__AVX512IFMA__" : 1 00:01:28.227 Message: lib/member: Defining dependency "member" 00:01:28.227 Message: lib/pcapng: Defining dependency "pcapng" 00:01:28.227 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:28.227 Message: lib/power: Defining dependency "power" 00:01:28.227 Message: lib/rawdev: Defining dependency "rawdev" 00:01:28.227 Message: lib/regexdev: Defining dependency "regexdev" 00:01:28.227 Message: lib/mldev: Defining dependency "mldev" 00:01:28.227 Message: lib/rib: Defining dependency "rib" 00:01:28.227 Message: lib/reorder: Defining dependency "reorder" 00:01:28.227 Message: lib/sched: Defining dependency "sched" 00:01:28.227 Message: lib/security: Defining dependency "security" 00:01:28.227 Message: lib/stack: Defining dependency "stack" 00:01:28.227 Has header "linux/userfaultfd.h" : YES 00:01:28.227 Has header "linux/vduse.h" : YES 00:01:28.227 Message: lib/vhost: Defining dependency 
"vhost" 00:01:28.227 Message: lib/ipsec: Defining dependency "ipsec" 00:01:28.227 Message: lib/pdcp: Defining dependency "pdcp" 00:01:28.227 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:28.227 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:28.227 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:28.227 Message: lib/fib: Defining dependency "fib" 00:01:28.227 Message: lib/port: Defining dependency "port" 00:01:28.227 Message: lib/pdump: Defining dependency "pdump" 00:01:28.227 Message: lib/table: Defining dependency "table" 00:01:28.227 Message: lib/pipeline: Defining dependency "pipeline" 00:01:28.227 Message: lib/graph: Defining dependency "graph" 00:01:28.227 Message: lib/node: Defining dependency "node" 00:01:28.227 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:28.227 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:28.227 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:29.620 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:29.620 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:29.620 Compiler for C supports arguments -Wno-unused-value: YES 00:01:29.620 Compiler for C supports arguments -Wno-format: YES 00:01:29.620 Compiler for C supports arguments -Wno-format-security: YES 00:01:29.620 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:29.620 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:29.620 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:29.620 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:29.620 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:29.620 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:29.620 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.620 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:29.620 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:29.620 
Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:29.620 Has header "sys/epoll.h" : YES 00:01:29.620 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:29.620 Configuring doxy-api-html.conf using configuration 00:01:29.620 Configuring doxy-api-man.conf using configuration 00:01:29.620 Program mandb found: YES (/usr/bin/mandb) 00:01:29.620 Program sphinx-build found: NO 00:01:29.620 Configuring rte_build_config.h using configuration 00:01:29.620 Message: 00:01:29.620 ================= 00:01:29.620 Applications Enabled 00:01:29.620 ================= 00:01:29.620 00:01:29.620 apps: 00:01:29.620 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:29.620 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:29.620 test-pmd, test-regex, test-sad, test-security-perf, 00:01:29.620 00:01:29.620 Message: 00:01:29.620 ================= 00:01:29.620 Libraries Enabled 00:01:29.620 ================= 00:01:29.620 00:01:29.620 libs: 00:01:29.620 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:29.620 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:29.620 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:29.620 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:29.620 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:29.620 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:29.620 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:29.620 00:01:29.620 00:01:29.620 Message: 00:01:29.620 =============== 00:01:29.620 Drivers Enabled 00:01:29.620 =============== 00:01:29.620 00:01:29.620 common: 00:01:29.620 00:01:29.620 bus: 00:01:29.620 pci, vdev, 00:01:29.620 mempool: 00:01:29.620 ring, 00:01:29.620 dma: 00:01:29.620 00:01:29.620 net: 00:01:29.620 i40e, 00:01:29.620 raw: 00:01:29.620 00:01:29.620 crypto: 00:01:29.620 00:01:29.620 
compress: 00:01:29.620 00:01:29.620 regex: 00:01:29.620 00:01:29.620 ml: 00:01:29.620 00:01:29.620 vdpa: 00:01:29.620 00:01:29.620 event: 00:01:29.620 00:01:29.620 baseband: 00:01:29.620 00:01:29.620 gpu: 00:01:29.620 00:01:29.620 00:01:29.620 Message: 00:01:29.620 ================= 00:01:29.620 Content Skipped 00:01:29.620 ================= 00:01:29.620 00:01:29.620 apps: 00:01:29.620 00:01:29.620 libs: 00:01:29.620 00:01:29.620 drivers: 00:01:29.620 common/cpt: not in enabled drivers build config 00:01:29.620 common/dpaax: not in enabled drivers build config 00:01:29.620 common/iavf: not in enabled drivers build config 00:01:29.620 common/idpf: not in enabled drivers build config 00:01:29.620 common/mvep: not in enabled drivers build config 00:01:29.620 common/octeontx: not in enabled drivers build config 00:01:29.620 bus/auxiliary: not in enabled drivers build config 00:01:29.620 bus/cdx: not in enabled drivers build config 00:01:29.620 bus/dpaa: not in enabled drivers build config 00:01:29.620 bus/fslmc: not in enabled drivers build config 00:01:29.620 bus/ifpga: not in enabled drivers build config 00:01:29.620 bus/platform: not in enabled drivers build config 00:01:29.620 bus/vmbus: not in enabled drivers build config 00:01:29.620 common/cnxk: not in enabled drivers build config 00:01:29.621 common/mlx5: not in enabled drivers build config 00:01:29.621 common/nfp: not in enabled drivers build config 00:01:29.621 common/qat: not in enabled drivers build config 00:01:29.621 common/sfc_efx: not in enabled drivers build config 00:01:29.621 mempool/bucket: not in enabled drivers build config 00:01:29.621 mempool/cnxk: not in enabled drivers build config 00:01:29.621 mempool/dpaa: not in enabled drivers build config 00:01:29.621 mempool/dpaa2: not in enabled drivers build config 00:01:29.621 mempool/octeontx: not in enabled drivers build config 00:01:29.621 mempool/stack: not in enabled drivers build config 00:01:29.621 dma/cnxk: not in enabled drivers build config 
00:01:29.621 dma/dpaa: not in enabled drivers build config 00:01:29.621 dma/dpaa2: not in enabled drivers build config 00:01:29.621 dma/hisilicon: not in enabled drivers build config 00:01:29.621 dma/idxd: not in enabled drivers build config 00:01:29.621 dma/ioat: not in enabled drivers build config 00:01:29.621 dma/skeleton: not in enabled drivers build config 00:01:29.621 net/af_packet: not in enabled drivers build config 00:01:29.621 net/af_xdp: not in enabled drivers build config 00:01:29.621 net/ark: not in enabled drivers build config 00:01:29.621 net/atlantic: not in enabled drivers build config 00:01:29.621 net/avp: not in enabled drivers build config 00:01:29.621 net/axgbe: not in enabled drivers build config 00:01:29.621 net/bnx2x: not in enabled drivers build config 00:01:29.621 net/bnxt: not in enabled drivers build config 00:01:29.621 net/bonding: not in enabled drivers build config 00:01:29.621 net/cnxk: not in enabled drivers build config 00:01:29.621 net/cpfl: not in enabled drivers build config 00:01:29.621 net/cxgbe: not in enabled drivers build config 00:01:29.621 net/dpaa: not in enabled drivers build config 00:01:29.621 net/dpaa2: not in enabled drivers build config 00:01:29.621 net/e1000: not in enabled drivers build config 00:01:29.621 net/ena: not in enabled drivers build config 00:01:29.621 net/enetc: not in enabled drivers build config 00:01:29.621 net/enetfec: not in enabled drivers build config 00:01:29.621 net/enic: not in enabled drivers build config 00:01:29.621 net/failsafe: not in enabled drivers build config 00:01:29.621 net/fm10k: not in enabled drivers build config 00:01:29.621 net/gve: not in enabled drivers build config 00:01:29.621 net/hinic: not in enabled drivers build config 00:01:29.621 net/hns3: not in enabled drivers build config 00:01:29.621 net/iavf: not in enabled drivers build config 00:01:29.621 net/ice: not in enabled drivers build config 00:01:29.621 net/idpf: not in enabled drivers build config 00:01:29.621 
net/igc: not in enabled drivers build config 00:01:29.621 net/ionic: not in enabled drivers build config 00:01:29.621 net/ipn3ke: not in enabled drivers build config 00:01:29.621 net/ixgbe: not in enabled drivers build config 00:01:29.621 net/mana: not in enabled drivers build config 00:01:29.621 net/memif: not in enabled drivers build config 00:01:29.621 net/mlx4: not in enabled drivers build config 00:01:29.621 net/mlx5: not in enabled drivers build config 00:01:29.621 net/mvneta: not in enabled drivers build config 00:01:29.621 net/mvpp2: not in enabled drivers build config 00:01:29.621 net/netvsc: not in enabled drivers build config 00:01:29.621 net/nfb: not in enabled drivers build config 00:01:29.621 net/nfp: not in enabled drivers build config 00:01:29.621 net/ngbe: not in enabled drivers build config 00:01:29.621 net/null: not in enabled drivers build config 00:01:29.621 net/octeontx: not in enabled drivers build config 00:01:29.621 net/octeon_ep: not in enabled drivers build config 00:01:29.621 net/pcap: not in enabled drivers build config 00:01:29.621 net/pfe: not in enabled drivers build config 00:01:29.621 net/qede: not in enabled drivers build config 00:01:29.621 net/ring: not in enabled drivers build config 00:01:29.621 net/sfc: not in enabled drivers build config 00:01:29.621 net/softnic: not in enabled drivers build config 00:01:29.621 net/tap: not in enabled drivers build config 00:01:29.621 net/thunderx: not in enabled drivers build config 00:01:29.621 net/txgbe: not in enabled drivers build config 00:01:29.621 net/vdev_netvsc: not in enabled drivers build config 00:01:29.621 net/vhost: not in enabled drivers build config 00:01:29.621 net/virtio: not in enabled drivers build config 00:01:29.621 net/vmxnet3: not in enabled drivers build config 00:01:29.621 raw/cnxk_bphy: not in enabled drivers build config 00:01:29.621 raw/cnxk_gpio: not in enabled drivers build config 00:01:29.621 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:29.621 
raw/ifpga: not in enabled drivers build config 00:01:29.621 raw/ntb: not in enabled drivers build config 00:01:29.621 raw/skeleton: not in enabled drivers build config 00:01:29.621 crypto/armv8: not in enabled drivers build config 00:01:29.621 crypto/bcmfs: not in enabled drivers build config 00:01:29.621 crypto/caam_jr: not in enabled drivers build config 00:01:29.621 crypto/ccp: not in enabled drivers build config 00:01:29.621 crypto/cnxk: not in enabled drivers build config 00:01:29.621 crypto/dpaa_sec: not in enabled drivers build config 00:01:29.621 crypto/dpaa2_sec: not in enabled drivers build config 00:01:29.621 crypto/ipsec_mb: not in enabled drivers build config 00:01:29.621 crypto/mlx5: not in enabled drivers build config 00:01:29.621 crypto/mvsam: not in enabled drivers build config 00:01:29.621 crypto/nitrox: not in enabled drivers build config 00:01:29.621 crypto/null: not in enabled drivers build config 00:01:29.621 crypto/octeontx: not in enabled drivers build config 00:01:29.621 crypto/openssl: not in enabled drivers build config 00:01:29.621 crypto/scheduler: not in enabled drivers build config 00:01:29.621 crypto/uadk: not in enabled drivers build config 00:01:29.621 crypto/virtio: not in enabled drivers build config 00:01:29.621 compress/isal: not in enabled drivers build config 00:01:29.621 compress/mlx5: not in enabled drivers build config 00:01:29.621 compress/octeontx: not in enabled drivers build config 00:01:29.621 compress/zlib: not in enabled drivers build config 00:01:29.621 regex/mlx5: not in enabled drivers build config 00:01:29.621 regex/cn9k: not in enabled drivers build config 00:01:29.621 ml/cnxk: not in enabled drivers build config 00:01:29.622 vdpa/ifc: not in enabled drivers build config 00:01:29.622 vdpa/mlx5: not in enabled drivers build config 00:01:29.622 vdpa/nfp: not in enabled drivers build config 00:01:29.622 vdpa/sfc: not in enabled drivers build config 00:01:29.622 event/cnxk: not in enabled drivers build config 
00:01:29.622 event/dlb2: not in enabled drivers build config 00:01:29.622 event/dpaa: not in enabled drivers build config 00:01:29.622 event/dpaa2: not in enabled drivers build config 00:01:29.622 event/dsw: not in enabled drivers build config 00:01:29.622 event/opdl: not in enabled drivers build config 00:01:29.622 event/skeleton: not in enabled drivers build config 00:01:29.622 event/sw: not in enabled drivers build config 00:01:29.622 event/octeontx: not in enabled drivers build config 00:01:29.622 baseband/acc: not in enabled drivers build config 00:01:29.622 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:29.622 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:29.622 baseband/la12xx: not in enabled drivers build config 00:01:29.622 baseband/null: not in enabled drivers build config 00:01:29.622 baseband/turbo_sw: not in enabled drivers build config 00:01:29.622 gpu/cuda: not in enabled drivers build config 00:01:29.622 00:01:29.622 00:01:29.622 Build targets in project: 215 00:01:29.622 00:01:29.622 DPDK 23.11.0 00:01:29.622 00:01:29.622 User defined options 00:01:29.622 libdir : lib 00:01:29.622 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.622 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:29.622 c_link_args : 00:01:29.622 enable_docs : false 00:01:29.622 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:29.622 enable_kmods : false 00:01:29.622 machine : native 00:01:29.622 tests : false 00:01:29.622 00:01:29.622 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:29.622 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
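The `scripts/common.sh` trace at the top of this log (the `decimal` / `ver1[v]` / `ver2[v]` lines) is a dotted-version comparison: it splits two version strings on `.`, validates each component as a decimal, and compares field by field — here deciding that DPDK 23.11 is older than 24.x before applying the `rte_pcapng.c` patch. A minimal bash sketch of that pattern (not the actual `scripts/common.sh` code; `lt` is a hypothetical helper name):

```shell
#!/usr/bin/env bash
# lt VER1 VER2 -> exit 0 if VER1 < VER2, 1 otherwise, 2 on a
# non-decimal component. Mirrors the field-by-field compare in the trace.
lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"   # "23.11" -> (23 11)
    read -ra ver2 <<< "$2"
    local v a b
    # Walk up to the longer of the two field lists, padding with 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ && $b =~ ^[0-9]+$ ]] || return 2
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal => not less-than
}

lt 23.11 24.07 && echo "23.11 < 24.07"
# -> 23.11 < 24.07
```

This is why the trace above returns 0 after comparing `ver1[v]=23` against `ver2[v]=24` and proceeds to `patch -p1`.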
00:01:29.896 17:11:38 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:01:29.896 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:30.166 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:30.166 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:30.166 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:30.166 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:30.166 [5/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:30.166 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:30.166 [7/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:30.166 [8/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:30.166 [9/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:30.435 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:30.435 [11/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:30.435 [12/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:30.435 [13/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:30.435 [14/705] Linking static target lib/librte_kvargs.a 00:01:30.435 [15/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:30.435 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:30.435 [17/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:30.435 [18/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:30.436 [19/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:30.436 [20/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 
00:01:30.436 [21/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:30.436 [22/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:30.436 [23/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:30.436 [24/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:30.436 [25/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:30.436 [26/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:30.704 [27/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:30.704 [28/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:30.704 [29/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:30.704 [30/705] Linking static target lib/librte_pci.a 00:01:30.704 [31/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:30.704 [32/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:30.704 [33/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:30.704 [34/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:30.704 [35/705] Linking static target lib/librte_log.a 00:01:30.969 [36/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:30.969 [37/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:30.969 [38/705] Linking static target lib/librte_cfgfile.a 00:01:30.969 [39/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:30.969 [40/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.969 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:30.969 [42/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:30.969 [43/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:31.233 [44/705] 
Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:31.233 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:31.233 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:31.233 [47/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.233 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:31.233 [49/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:31.233 [50/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:31.233 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:31.233 [52/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:31.233 [53/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:31.233 [54/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:31.233 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:31.233 [56/705] Linking static target lib/librte_meter.a 00:01:31.233 [57/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:31.233 [58/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:31.233 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:31.233 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:31.233 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:31.233 [62/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:31.233 [63/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:31.233 [64/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:31.233 [65/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:31.233 [66/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:31.233 [67/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.233 [68/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:31.233 [69/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:31.233 [70/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:31.233 [71/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:31.233 [72/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.233 [73/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:31.233 [74/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:31.233 [75/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.233 [76/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:31.233 [77/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:31.233 [78/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:31.233 [79/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:31.502 [80/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:31.502 [81/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:31.502 [82/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:31.502 [83/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:31.502 [84/705] Linking static target lib/librte_ring.a 00:01:31.502 [85/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:31.502 [86/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:31.502 [87/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:31.502 [88/705] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:31.502 [89/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:31.502 [90/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:31.502 [91/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:31.502 [92/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:31.502 [93/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:31.502 [94/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:31.502 [95/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:31.502 [96/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:31.502 [97/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:31.502 [98/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:31.502 [99/705] Linking static target lib/librte_cmdline.a 00:01:31.502 [100/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:31.502 [101/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:31.502 [102/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:31.502 [103/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:31.502 [104/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:31.502 [105/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:31.502 [106/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:31.502 [107/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:31.502 [108/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:31.502 [109/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:31.502 [110/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:31.502 [111/705] Linking static target 
lib/librte_metrics.a 00:01:31.502 [112/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:31.502 [113/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:31.502 [114/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:31.502 [115/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:31.502 [116/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:31.502 [117/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:31.502 [118/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:31.502 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:31.502 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:31.502 [121/705] Linking static target lib/librte_bitratestats.a 00:01:31.502 [122/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:31.502 [123/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:31.761 [124/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:31.761 [125/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:31.761 [126/705] Linking static target lib/librte_net.a 00:01:31.761 [127/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:31.761 [128/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:31.761 [129/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:31.761 [130/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:31.761 [131/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:31.761 [132/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:31.761 [133/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:31.761 [134/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:01:31.761 [135/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:31.761 [136/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:31.761 [137/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:31.761 [138/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.761 [139/705] Linking static target lib/librte_timer.a 00:01:31.761 [140/705] Linking static target lib/librte_compressdev.a 00:01:31.761 [141/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:31.761 [142/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:31.761 [143/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:31.761 [144/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:31.761 [145/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:31.761 [146/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.761 [147/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:31.761 [148/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:31.761 [149/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:31.761 [150/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:31.761 [151/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.761 [152/705] Linking static target lib/librte_mempool.a 00:01:31.761 [153/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:31.761 [154/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:31.761 [155/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:31.761 [156/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:31.761 [157/705] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:31.761 [158/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:31.761 [159/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:31.761 [160/705] Linking target lib/librte_log.so.24.0 00:01:31.761 [161/705] Linking static target lib/librte_dispatcher.a 00:01:31.761 [162/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:31.761 [163/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.023 [164/705] Linking static target lib/librte_bbdev.a 00:01:32.023 [165/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:32.023 [166/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:32.023 [167/705] Linking static target lib/librte_gpudev.a 00:01:32.023 [168/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:32.023 [169/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:32.023 [170/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:32.023 [171/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.023 [172/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:32.023 [173/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:32.023 [174/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:32.023 [175/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:32.023 [176/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:32.023 [177/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:32.023 [178/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:32.023 [179/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:32.023 [180/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 
00:01:32.023 [181/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:32.023 [182/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:32.023 [183/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:32.023 [184/705] Linking static target lib/librte_dmadev.a 00:01:32.023 [185/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:32.023 [186/705] Linking static target lib/librte_jobstats.a 00:01:32.023 [187/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:32.023 [188/705] Linking static target lib/librte_distributor.a 00:01:32.023 [189/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:32.023 [190/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:32.023 [191/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:32.023 [192/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:32.023 [193/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:32.023 [194/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:32.023 [195/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:32.023 [196/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:32.023 [197/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:32.023 [198/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:32.023 [199/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.023 [200/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:32.023 [201/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:32.023 [202/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:32.023 [203/705] Linking target lib/librte_kvargs.so.24.0 00:01:32.023 [204/705] Compiling C object 
lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:32.023 [205/705] Linking static target lib/librte_gro.a 00:01:32.023 [206/705] Linking static target lib/librte_stack.a 00:01:32.023 [207/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:32.287 [208/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:32.287 [209/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:32.287 [210/705] Linking static target lib/librte_telemetry.a 00:01:32.287 [211/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.287 [212/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:32.287 [213/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:32.287 [214/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:01:32.287 [215/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:32.287 [216/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:32.287 [217/705] Linking static target lib/librte_gso.a 00:01:32.287 [218/705] Linking static target lib/librte_rcu.a 00:01:32.287 [219/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:32.287 [220/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:32.287 [221/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:32.287 [222/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:32.287 [223/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:32.287 [224/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:32.287 [225/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:32.287 [226/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:32.287 [227/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:32.287 [228/705] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:32.287 [229/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.287 [230/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:32.287 [231/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:32.287 [232/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:32.287 [233/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:32.287 [234/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:32.287 [235/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:32.287 [236/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:32.287 [237/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:32.287 [238/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:32.287 [239/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:32.287 [240/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:32.287 [241/705] Linking static target lib/librte_latencystats.a 00:01:32.287 [242/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:32.287 [243/705] Linking static target lib/librte_reorder.a 00:01:32.287 [244/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:32.287 [245/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:32.287 [246/705] Linking static target lib/librte_rawdev.a 00:01:32.287 [247/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:32.287 [248/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:32.287 [249/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:32.287 [250/705] Linking static target lib/librte_mldev.a 00:01:32.287 [251/705] Compiling C 
object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:32.287 [252/705] Linking static target lib/librte_regexdev.a 00:01:32.287 [253/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:32.287 [254/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:32.287 [255/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:32.287 [256/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:32.287 [257/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:32.287 [258/705] Linking static target lib/librte_eal.a 00:01:32.548 [259/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.548 [260/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:32.548 [261/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:32.548 [262/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:32.548 [263/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:32.548 [264/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:32.548 [265/705] Linking static target lib/librte_ip_frag.a 00:01:32.548 [266/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.548 [267/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:32.548 [268/705] Linking static target lib/librte_security.a 00:01:32.548 [269/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:32.548 [270/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.548 [271/705] Linking static target lib/librte_bpf.a 00:01:32.548 [272/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.548 [273/705] Linking static target lib/librte_power.a 00:01:32.548 [274/705] Compiling C object 
lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:32.548 [275/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:32.548 [276/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.548 [277/705] Linking static target lib/librte_pcapng.a 00:01:32.548 [278/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:32.548 [279/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:32.548 [280/705] Linking static target lib/librte_mbuf.a 00:01:32.548 [281/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:32.548 [282/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:32.548 [283/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:32.548 [284/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:32.548 [285/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:32.548 [286/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.548 [287/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.548 [288/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.548 [289/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:32.548 [290/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:32.548 [291/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:32.548 [292/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:32.548 [293/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:32.548 [294/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:32.548 [295/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.808 [296/705] Compiling C object 
lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:32.808 [297/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.809 [298/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:32.809 [299/705] Linking static target lib/librte_efd.a 00:01:32.809 [300/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:32.809 [301/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:32.809 [302/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:32.809 [303/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:32.809 [304/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:32.809 [305/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:32.809 [306/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:32.809 [307/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:32.809 [308/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:32.809 [309/705] Linking static target lib/librte_rib.a 00:01:32.809 [310/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:32.809 [311/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:32.809 [312/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:32.809 [313/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:32.809 [314/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:32.809 [315/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:32.809 [316/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:32.809 [317/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:32.809 [318/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:32.809 [319/705] 
Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:32.809 [320/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.809 [321/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.809 [322/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:32.809 [323/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.809 [324/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:32.809 [325/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:32.809 [326/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:32.809 [327/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:32.809 [328/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:32.809 [329/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:32.809 [330/705] Linking static target lib/librte_lpm.a 00:01:32.809 [331/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.809 [332/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:32.809 [333/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.809 [334/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.809 [335/705] Linking target lib/librte_telemetry.so.24.0 00:01:32.809 [336/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:32.809 [337/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:32.809 [338/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:32.809 [339/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:32.809 [340/705] Generating lib/bbdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:32.809 [341/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:32.809 [342/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:32.809 [343/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:32.809 [344/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:32.809 [345/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:32.809 [346/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:33.071 [347/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:33.071 [348/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.071 [349/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:33.071 [350/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:33.071 [351/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:33.071 [352/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:33.071 [353/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:33.071 [354/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:33.071 [355/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:33.071 [356/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:33.071 [357/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:33.071 [358/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:33.071 [359/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:33.071 [360/705] Linking static target lib/librte_fib.a 00:01:33.071 [361/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:33.071 [362/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.071 [363/705] Generating symbol 
file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:33.071 [364/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:33.071 [365/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:33.071 [366/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:33.071 [367/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.071 [368/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:33.071 [369/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.071 [370/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:33.071 [371/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:33.071 [372/705] Linking static target lib/librte_graph.a 00:01:33.071 [373/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:33.071 [374/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:33.071 [375/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:33.071 [376/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:33.071 [377/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:33.071 [378/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:33.331 [379/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:33.331 [380/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:33.331 [381/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:33.331 [382/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:33.331 [383/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:33.331 [384/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:33.331 [385/705] Linking static target lib/librte_pdump.a 00:01:33.331 [386/705] Compiling C object 
app/dpdk-graph.p/graph_l3fwd.c.o 00:01:33.331 [387/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:33.331 [388/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:33.331 [389/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:33.331 [390/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:33.331 [391/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:33.331 [392/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:33.331 [393/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:33.331 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:33.331 [395/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:33.331 [396/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.331 [397/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:33.331 [398/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:33.331 [399/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:33.331 [400/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:33.331 [401/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:33.331 [402/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:33.331 [403/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.331 [404/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.331 [405/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:33.331 [406/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:33.331 [407/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:33.331 
[408/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:33.331 [409/705] Linking static target drivers/librte_bus_vdev.a 00:01:33.331 [410/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:33.591 [411/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:33.591 [412/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:33.591 [413/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.591 [414/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:33.591 [415/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:33.591 [416/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:33.591 [417/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:33.591 [418/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.591 [419/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:33.591 [420/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:33.591 [421/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:33.591 [422/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:33.591 [423/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:33.591 [424/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:33.591 [425/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:33.591 [426/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:33.591 [427/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.591 [428/705] Linking static target lib/librte_table.a 00:01:33.591 [429/705] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:33.591 [430/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.591 [431/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:33.591 [432/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:33.591 [433/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:33.591 [434/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.591 [435/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:33.591 [436/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:33.591 [437/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:33.591 [438/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:33.591 [439/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:33.591 [440/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.591 [441/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.591 [442/705] Linking static target drivers/librte_bus_pci.a 00:01:33.591 [443/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:33.591 [444/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:33.591 [445/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:33.591 [446/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:33.591 [447/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:33.591 [448/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:33.591 [449/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:33.591 [450/705] 
Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:33.591 [451/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:33.591 [452/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:33.591 [453/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:33.591 [454/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:33.852 [455/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.852 [456/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:33.852 [457/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:33.852 [458/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:33.852 [459/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:33.852 [460/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:33.852 [461/705] Linking static target lib/librte_sched.a 00:01:33.852 [462/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:33.852 [463/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:33.852 [464/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:33.852 [465/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:33.852 [466/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:33.852 [467/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:33.852 [468/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:33.852 [469/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:33.852 [470/705] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:33.852 [471/705] Linking static target lib/librte_cryptodev.a 00:01:33.852 [472/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:33.852 [473/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:33.852 [474/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:33.852 [475/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:33.852 [476/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:33.852 [477/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:33.852 [478/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:33.852 [479/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:33.852 [480/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:33.852 [481/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:33.852 [482/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:33.852 [483/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:33.852 [484/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:33.852 [485/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:33.852 [486/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:33.852 [487/705] Linking static target lib/librte_node.a 00:01:33.852 [488/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:33.852 [489/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.852 [490/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:33.852 [491/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:33.852 [492/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.852 [493/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.852 [494/705] Linking static target drivers/librte_mempool_ring.a 00:01:33.852 [495/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:33.852 [496/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:33.852 [497/705] Linking static target lib/librte_ipsec.a 00:01:33.852 [498/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:33.852 [499/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:33.852 [500/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:33.852 [501/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:34.114 [502/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:34.114 [503/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:34.114 [504/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:34.114 [505/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:34.114 [506/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:34.114 [507/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:34.114 [508/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:34.114 [509/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:34.114 [510/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:34.114 [511/705] Linking static target lib/librte_member.a 00:01:34.114 [512/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:34.114 [513/705] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:34.114 [514/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:34.114 [515/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:34.114 [516/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:34.114 [517/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:34.114 [518/705] Linking static target lib/librte_pdcp.a 00:01:34.114 [519/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:34.114 [520/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:34.114 [521/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:34.114 [522/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:34.114 [523/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:34.114 [524/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:34.114 [525/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:34.114 [526/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:34.114 [527/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:34.114 [528/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.114 [529/705] Linking static target lib/librte_port.a 00:01:34.114 [530/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:34.114 [531/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:34.114 [532/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:34.114 [533/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:34.114 [534/705] Linking static target lib/acl/libavx2_tmp.a 00:01:34.114 [535/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:34.114 [536/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:34.114 [537/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:34.114 [538/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:34.114 [539/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:34.114 [540/705] Linking static target lib/librte_acl.a 00:01:34.377 [541/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:34.377 [542/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.377 [543/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:34.377 [544/705] Linking static target lib/librte_hash.a 00:01:34.377 [545/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.377 [546/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:34.377 [547/705] Linking static target lib/librte_eventdev.a 00:01:34.377 [548/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.377 [549/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:34.377 [550/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:34.377 [551/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:34.377 [552/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:34.377 [553/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:34.377 [554/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:34.377 [555/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.377 [556/705] Generating lib/member.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:34.377 [557/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:34.377 [558/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:34.638 [559/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.638 [560/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:34.638 [561/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:34.638 [562/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:34.638 [563/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.638 [564/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:34.638 [565/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:34.638 [566/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.638 [567/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:34.901 [568/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:35.162 [569/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.162 [570/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:35.162 [571/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.162 [572/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:35.423 [573/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:35.423 [574/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:35.423 [575/705] Linking static target lib/librte_ethdev.a 00:01:35.423 [576/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:35.684 [577/705] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:35.944 [578/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.204 [579/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:36.204 [580/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:36.204 [581/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:36.464 [582/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:36.464 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:36.464 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:36.464 [585/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:36.724 [586/705] Linking static target drivers/librte_net_i40e.a 00:01:37.294 [587/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:37.555 [588/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:37.815 [589/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.075 [590/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.280 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:42.280 [592/705] Linking static target lib/librte_pipeline.a 00:01:43.228 [593/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:43.228 [594/705] Linking static target lib/librte_vhost.a 00:01:43.550 [595/705] Linking target app/dpdk-graph 00:01:43.550 [596/705] Linking target app/dpdk-test-acl 00:01:43.550 [597/705] Linking target app/dpdk-test-gpudev 00:01:43.550 [598/705] Linking target app/dpdk-dumpcap 00:01:43.550 [599/705] Linking target app/dpdk-pdump 00:01:43.550 [600/705] Linking target app/dpdk-test-compress-perf 00:01:43.550 [601/705] 
Linking target app/dpdk-test-dma-perf 00:01:43.550 [602/705] Linking target app/dpdk-test-sad 00:01:43.550 [603/705] Linking target app/dpdk-test-crypto-perf 00:01:43.550 [604/705] Linking target app/dpdk-test-pipeline 00:01:43.550 [605/705] Linking target app/dpdk-test-eventdev 00:01:43.550 [606/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.550 [607/705] Linking target app/dpdk-test-cmdline 00:01:43.550 [608/705] Linking target app/dpdk-proc-info 00:01:43.550 [609/705] Linking target app/dpdk-test-fib 00:01:43.550 [610/705] Linking target app/dpdk-test-flow-perf 00:01:43.550 [611/705] Linking target app/dpdk-test-bbdev 00:01:43.550 [612/705] Linking target app/dpdk-test-regex 00:01:43.550 [613/705] Linking target app/dpdk-test-mldev 00:01:43.550 [614/705] Linking target app/dpdk-test-security-perf 00:01:43.550 [615/705] Linking target app/dpdk-testpmd 00:01:43.550 [616/705] Linking target lib/librte_eal.so.24.0 00:01:43.868 [617/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:43.868 [618/705] Linking target lib/librte_ring.so.24.0 00:01:43.868 [619/705] Linking target lib/librte_timer.so.24.0 00:01:43.868 [620/705] Linking target lib/librte_meter.so.24.0 00:01:43.868 [621/705] Linking target lib/librte_pci.so.24.0 00:01:43.868 [622/705] Linking target lib/librte_dmadev.so.24.0 00:01:43.868 [623/705] Linking target lib/librte_cfgfile.so.24.0 00:01:43.868 [624/705] Linking target lib/librte_jobstats.so.24.0 00:01:43.868 [625/705] Linking target lib/librte_rawdev.so.24.0 00:01:43.868 [626/705] Linking target lib/librte_stack.so.24.0 00:01:43.868 [627/705] Linking target drivers/librte_bus_vdev.so.24.0 00:01:43.868 [628/705] Linking target lib/librte_acl.so.24.0 00:01:43.868 [629/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.868 [630/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 
00:01:43.868 [631/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:43.868 [632/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:43.868 [633/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:43.868 [634/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:43.868 [635/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:44.146 [636/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:44.146 [637/705] Linking target lib/librte_rcu.so.24.0 00:01:44.146 [638/705] Linking target lib/librte_mempool.so.24.0 00:01:44.146 [639/705] Linking target drivers/librte_bus_pci.so.24.0 00:01:44.146 [640/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:44.146 [641/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:44.146 [642/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:44.146 [643/705] Linking target lib/librte_mbuf.so.24.0 00:01:44.146 [644/705] Linking target lib/librte_rib.so.24.0 00:01:44.146 [645/705] Linking target drivers/librte_mempool_ring.so.24.0 00:01:44.408 [646/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:44.408 [647/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:44.408 [648/705] Linking target lib/librte_compressdev.so.24.0 00:01:44.408 [649/705] Linking target lib/librte_net.so.24.0 00:01:44.408 [650/705] Linking target lib/librte_bbdev.so.24.0 00:01:44.408 [651/705] Linking target lib/librte_distributor.so.24.0 00:01:44.408 [652/705] Linking target lib/librte_mldev.so.24.0 00:01:44.408 [653/705] Linking target lib/librte_reorder.so.24.0 00:01:44.408 [654/705] Linking target lib/librte_regexdev.so.24.0 00:01:44.408 [655/705] 
Linking target lib/librte_gpudev.so.24.0 00:01:44.408 [656/705] Linking target lib/librte_cryptodev.so.24.0 00:01:44.408 [657/705] Linking target lib/librte_sched.so.24.0 00:01:44.408 [658/705] Linking target lib/librte_fib.so.24.0 00:01:44.408 [659/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:44.408 [660/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:44.669 [661/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:44.669 [662/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:44.669 [663/705] Linking target lib/librte_hash.so.24.0 00:01:44.669 [664/705] Linking target lib/librte_cmdline.so.24.0 00:01:44.669 [665/705] Linking target lib/librte_security.so.24.0 00:01:44.669 [666/705] Linking target lib/librte_ethdev.so.24.0 00:01:44.669 [667/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:44.669 [668/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:44.669 [669/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:44.669 [670/705] Linking target lib/librte_lpm.so.24.0 00:01:44.669 [671/705] Linking target lib/librte_efd.so.24.0 00:01:44.669 [672/705] Linking target lib/librte_member.so.24.0 00:01:44.669 [673/705] Linking target lib/librte_ipsec.so.24.0 00:01:44.669 [674/705] Linking target lib/librte_pdcp.so.24.0 00:01:44.930 [675/705] Linking target lib/librte_metrics.so.24.0 00:01:44.930 [676/705] Linking target lib/librte_gso.so.24.0 00:01:44.930 [677/705] Linking target lib/librte_ip_frag.so.24.0 00:01:44.930 [678/705] Linking target lib/librte_pcapng.so.24.0 00:01:44.930 [679/705] Linking target lib/librte_bpf.so.24.0 00:01:44.930 [680/705] Linking target lib/librte_gro.so.24.0 00:01:44.930 [681/705] Linking target lib/librte_power.so.24.0 00:01:44.930 [682/705] Linking 
target lib/librte_eventdev.so.24.0 00:01:44.930 [683/705] Linking target drivers/librte_net_i40e.so.24.0 00:01:44.930 [684/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:44.930 [685/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:44.930 [686/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:44.930 [687/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:44.930 [688/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:44.930 [689/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:44.930 [690/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:44.930 [691/705] Linking target lib/librte_bitratestats.so.24.0 00:01:44.930 [692/705] Linking target lib/librte_latencystats.so.24.0 00:01:44.931 [693/705] Linking target lib/librte_graph.so.24.0 00:01:44.931 [694/705] Linking target lib/librte_pdump.so.24.0 00:01:44.931 [695/705] Linking target lib/librte_dispatcher.so.24.0 00:01:44.931 [696/705] Linking target lib/librte_port.so.24.0 00:01:45.192 [697/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:45.192 [698/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:45.192 [699/705] Linking target lib/librte_node.so.24.0 00:01:45.192 [700/705] Linking target lib/librte_table.so.24.0 00:01:45.192 [701/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.452 [702/705] Linking target lib/librte_vhost.so.24.0 00:01:45.452 [703/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:47.368 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.629 [705/705] Linking target 
lib/librte_pipeline.so.24.0 00:01:47.629 17:11:55 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:01:47.629 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:47.629 [0/1] Installing files. 00:01:47.895 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:47.896 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:47.896 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:47.896 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.896 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:47.897 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:47.897 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:47.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:47.898 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:47.898 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:47.898 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:47.898 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:47.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:47.900 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:47.901 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:47.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:47.901 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_telemetry.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_pci.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_cfgfile.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing 
lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_rawdev.a 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.901 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:47.902 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_ipsec.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing drivers/librte_bus_pci.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:48.482 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:48.482 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:48.482 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.482 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:48.482 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-dma-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.482 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.483 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.484 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.485 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:48.486 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:48.486 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:48.486 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:48.486 Installing symlink pointing 
to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:48.486 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:48.486 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:48.486 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:48.486 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:48.486 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:48.486 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:48.486 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:48.486 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:48.486 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:48.486 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:48.486 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:48.486 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:48.486 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:48.486 
Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:48.486 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:48.486 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:48.486 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:48.486 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:48.486 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:48.486 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:48.486 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:48.486 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:48.486 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:48.486 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:48.486 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:48.486 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:48.486 Installing symlink pointing to librte_hash.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:48.486 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:48.486 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:48.486 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:48.486 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:48.486 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:48.486 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:48.486 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:48.486 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:48.486 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:48.486 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:48.486 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:48.486 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:48.486 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 
00:01:48.486 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:48.486 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:48.486 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:48.486 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:48.486 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:48.486 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:48.486 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:48.486 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:48.486 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:48.486 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:48.486 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:48.487 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:48.487 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:48.487 Installing symlink pointing to 
librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:48.487 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:48.487 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:48.487 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:48.487 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:48.487 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:48.487 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:48.487 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:48.487 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:48.487 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:48.487 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:48.487 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:48.487 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:48.487 Installing symlink pointing to librte_lpm.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:48.487 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:48.487 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:48.487 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:48.487 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:48.487 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:48.487 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:48.487 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:48.487 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:48.487 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:48.487 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:48.487 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:48.487 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:48.487 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:48.487 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:48.487 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:48.487 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:48.487 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:48.487 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:48.487 Installing symlink pointing to librte_rawdev.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:48.487 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:48.487 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:48.487 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:48.487 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:48.487 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:48.487 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:48.487 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:48.487 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:48.487 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:48.487 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:48.487 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:48.487 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:48.487 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:48.487 
Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:48.487 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:48.487 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:48.487 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:48.487 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:48.487 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:48.487 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:48.487 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:48.487 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:48.487 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:48.487 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:48.487 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:48.487 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:48.487 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 
00:01:48.487 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:48.487 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:48.487 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:48.487 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:48.487 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:48.487 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:48.487 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:48.487 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:48.487 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:48.487 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:48.487 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:48.487 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:48.487 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
00:01:48.487 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:48.487 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:48.487 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:48.487 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:48.487 17:11:56 -- common/autobuild_common.sh@192 -- $ uname -s 00:01:48.487 17:11:56 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:48.487 17:11:56 -- common/autobuild_common.sh@203 -- $ cat 00:01:48.487 17:11:56 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:48.487 00:01:48.487 real 0m25.520s 00:01:48.487 user 7m21.097s 00:01:48.487 sys 3m39.490s 00:01:48.487 17:11:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:48.487 17:11:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.487 ************************************ 00:01:48.487 END TEST build_native_dpdk 00:01:48.487 ************************************ 00:01:48.487 17:11:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:48.487 17:11:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:48.487 17:11:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:48.487 17:11:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:48.487 17:11:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:48.487 17:11:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:48.487 17:11:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:48.487 17:11:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:48.487 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:48.775 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.775 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:48.775 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:49.347 Using 'verbs' RDMA provider 00:02:04.843 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:17.074 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:17.074 Creating mk/config.mk...done. 00:02:17.074 Creating mk/cc.flags.mk...done. 00:02:17.074 Type 'make' to build. 00:02:17.074 17:12:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:17.074 17:12:25 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:17.074 17:12:25 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:17.074 17:12:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:17.074 ************************************ 00:02:17.074 START TEST make 00:02:17.074 ************************************ 00:02:17.074 17:12:25 -- common/autotest_common.sh@1104 -- $ make -j144 00:02:17.336 make[1]: Nothing to be done for 'all'. 
00:02:18.718 The Meson build system 00:02:18.718 Version: 1.5.0 00:02:18.718 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:18.718 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:18.718 Build type: native build 00:02:18.718 Project name: libvfio-user 00:02:18.718 Project version: 0.0.1 00:02:18.718 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:18.718 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:18.718 Host machine cpu family: x86_64 00:02:18.718 Host machine cpu: x86_64 00:02:18.718 Run-time dependency threads found: YES 00:02:18.718 Library dl found: YES 00:02:18.718 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:18.718 Run-time dependency json-c found: YES 0.17 00:02:18.718 Run-time dependency cmocka found: YES 1.1.7 00:02:18.718 Program pytest-3 found: NO 00:02:18.718 Program flake8 found: NO 00:02:18.718 Program misspell-fixer found: NO 00:02:18.718 Program restructuredtext-lint found: NO 00:02:18.718 Program valgrind found: YES (/usr/bin/valgrind) 00:02:18.718 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.718 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.718 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.718 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:18.718 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:18.718 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:18.718 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:18.718 Build targets in project: 8 00:02:18.718 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:18.718 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:18.718 00:02:18.718 libvfio-user 0.0.1 00:02:18.718 00:02:18.718 User defined options 00:02:18.718 buildtype : debug 00:02:18.718 default_library: shared 00:02:18.718 libdir : /usr/local/lib 00:02:18.718 00:02:18.718 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.287 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:19.287 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:19.287 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:19.287 [3/37] Compiling C object samples/null.p/null.c.o 00:02:19.287 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:19.287 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:19.287 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:19.287 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:19.287 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:19.287 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:19.287 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:19.287 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:19.287 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:19.287 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:19.287 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:19.287 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:19.287 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:19.287 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:19.287 [18/37] Compiling C object 
samples/client.p/.._lib_tran_sock.c.o 00:02:19.287 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:19.287 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:19.287 [21/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:19.287 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:19.287 [23/37] Compiling C object samples/server.p/server.c.o 00:02:19.287 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:19.287 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:19.287 [26/37] Compiling C object samples/client.p/client.c.o 00:02:19.287 [27/37] Linking target samples/client 00:02:19.287 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:19.287 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:19.548 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:19.548 [31/37] Linking target test/unit_tests 00:02:19.548 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:19.548 [33/37] Linking target samples/null 00:02:19.548 [34/37] Linking target samples/gpio-pci-idio-16 00:02:19.548 [35/37] Linking target samples/lspci 00:02:19.548 [36/37] Linking target samples/server 00:02:19.548 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:19.548 INFO: autodetecting backend as ninja 00:02:19.548 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:19.548 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:20.119 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:20.119 ninja: no work to do. 
00:02:28.258 CC lib/log/log.o 00:02:28.258 CC lib/ut_mock/mock.o 00:02:28.258 CC lib/log/log_flags.o 00:02:28.258 CC lib/log/log_deprecated.o 00:02:28.258 CC lib/ut/ut.o 00:02:28.258 LIB libspdk_ut_mock.a 00:02:28.258 SO libspdk_ut_mock.so.5.0 00:02:28.258 LIB libspdk_log.a 00:02:28.258 LIB libspdk_ut.a 00:02:28.258 SO libspdk_log.so.6.1 00:02:28.258 SO libspdk_ut.so.1.0 00:02:28.258 SYMLINK libspdk_ut_mock.so 00:02:28.258 SYMLINK libspdk_ut.so 00:02:28.258 SYMLINK libspdk_log.so 00:02:28.258 CC lib/util/base64.o 00:02:28.258 CC lib/util/bit_array.o 00:02:28.258 CC lib/dma/dma.o 00:02:28.258 CC lib/util/cpuset.o 00:02:28.258 CC lib/util/crc16.o 00:02:28.258 CC lib/util/crc32.o 00:02:28.258 CXX lib/trace_parser/trace.o 00:02:28.258 CC lib/util/crc32c.o 00:02:28.258 CC lib/util/crc32_ieee.o 00:02:28.258 CC lib/util/crc64.o 00:02:28.258 CC lib/util/dif.o 00:02:28.258 CC lib/ioat/ioat.o 00:02:28.258 CC lib/util/fd.o 00:02:28.258 CC lib/util/file.o 00:02:28.258 CC lib/util/hexlify.o 00:02:28.258 CC lib/util/iov.o 00:02:28.258 CC lib/util/math.o 00:02:28.258 CC lib/util/pipe.o 00:02:28.258 CC lib/util/strerror_tls.o 00:02:28.258 CC lib/util/string.o 00:02:28.258 CC lib/util/uuid.o 00:02:28.258 CC lib/util/fd_group.o 00:02:28.258 CC lib/util/xor.o 00:02:28.258 CC lib/util/zipf.o 00:02:28.258 CC lib/vfio_user/host/vfio_user_pci.o 00:02:28.258 CC lib/vfio_user/host/vfio_user.o 00:02:28.258 LIB libspdk_dma.a 00:02:28.258 SO libspdk_dma.so.3.0 00:02:28.258 SYMLINK libspdk_dma.so 00:02:28.258 LIB libspdk_ioat.a 00:02:28.258 SO libspdk_ioat.so.6.0 00:02:28.258 LIB libspdk_vfio_user.a 00:02:28.258 SYMLINK libspdk_ioat.so 00:02:28.258 SO libspdk_vfio_user.so.4.0 00:02:28.258 LIB libspdk_util.a 00:02:28.258 SYMLINK libspdk_vfio_user.so 00:02:28.258 SO libspdk_util.so.8.0 00:02:28.258 SYMLINK libspdk_util.so 00:02:28.258 LIB libspdk_trace_parser.a 00:02:28.258 SO libspdk_trace_parser.so.4.0 00:02:28.258 CC lib/rdma/common.o 00:02:28.258 CC lib/rdma/rdma_verbs.o 00:02:28.258 CC 
lib/json/json_parse.o 00:02:28.258 CC lib/conf/conf.o 00:02:28.258 CC lib/json/json_util.o 00:02:28.258 CC lib/json/json_write.o 00:02:28.258 CC lib/env_dpdk/env.o 00:02:28.258 CC lib/env_dpdk/memory.o 00:02:28.258 CC lib/env_dpdk/pci.o 00:02:28.519 CC lib/env_dpdk/init.o 00:02:28.519 CC lib/vmd/vmd.o 00:02:28.519 CC lib/env_dpdk/threads.o 00:02:28.519 CC lib/vmd/led.o 00:02:28.519 CC lib/env_dpdk/pci_ioat.o 00:02:28.519 CC lib/idxd/idxd.o 00:02:28.519 CC lib/env_dpdk/pci_virtio.o 00:02:28.519 CC lib/idxd/idxd_user.o 00:02:28.519 CC lib/env_dpdk/pci_vmd.o 00:02:28.519 CC lib/idxd/idxd_kernel.o 00:02:28.519 CC lib/env_dpdk/pci_idxd.o 00:02:28.519 CC lib/env_dpdk/pci_event.o 00:02:28.519 CC lib/env_dpdk/sigbus_handler.o 00:02:28.519 CC lib/env_dpdk/pci_dpdk.o 00:02:28.519 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:28.519 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:28.519 SYMLINK libspdk_trace_parser.so 00:02:28.519 LIB libspdk_conf.a 00:02:28.519 SO libspdk_conf.so.5.0 00:02:28.780 LIB libspdk_json.a 00:02:28.780 LIB libspdk_rdma.a 00:02:28.780 SO libspdk_json.so.5.1 00:02:28.780 SO libspdk_rdma.so.5.0 00:02:28.780 SYMLINK libspdk_conf.so 00:02:28.780 SYMLINK libspdk_json.so 00:02:28.780 SYMLINK libspdk_rdma.so 00:02:28.780 LIB libspdk_idxd.a 00:02:28.780 SO libspdk_idxd.so.11.0 00:02:29.042 LIB libspdk_vmd.a 00:02:29.042 SYMLINK libspdk_idxd.so 00:02:29.042 CC lib/jsonrpc/jsonrpc_server.o 00:02:29.042 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:29.042 CC lib/jsonrpc/jsonrpc_client.o 00:02:29.042 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:29.042 SO libspdk_vmd.so.5.0 00:02:29.042 SYMLINK libspdk_vmd.so 00:02:29.303 LIB libspdk_jsonrpc.a 00:02:29.303 SO libspdk_jsonrpc.so.5.1 00:02:29.303 SYMLINK libspdk_jsonrpc.so 00:02:29.566 LIB libspdk_env_dpdk.a 00:02:29.566 CC lib/rpc/rpc.o 00:02:29.566 SO libspdk_env_dpdk.so.13.0 00:02:29.832 SYMLINK libspdk_env_dpdk.so 00:02:29.832 LIB libspdk_rpc.a 00:02:29.832 SO libspdk_rpc.so.5.0 00:02:29.832 SYMLINK libspdk_rpc.so 00:02:30.094 CC 
lib/trace/trace.o 00:02:30.094 CC lib/trace/trace_flags.o 00:02:30.094 CC lib/notify/notify.o 00:02:30.094 CC lib/trace/trace_rpc.o 00:02:30.094 CC lib/notify/notify_rpc.o 00:02:30.094 CC lib/sock/sock.o 00:02:30.094 CC lib/sock/sock_rpc.o 00:02:30.356 LIB libspdk_notify.a 00:02:30.356 LIB libspdk_trace.a 00:02:30.356 SO libspdk_notify.so.5.0 00:02:30.356 SO libspdk_trace.so.9.0 00:02:30.356 SYMLINK libspdk_notify.so 00:02:30.617 SYMLINK libspdk_trace.so 00:02:30.617 LIB libspdk_sock.a 00:02:30.617 SO libspdk_sock.so.8.0 00:02:30.617 SYMLINK libspdk_sock.so 00:02:30.617 CC lib/thread/thread.o 00:02:30.617 CC lib/thread/iobuf.o 00:02:30.877 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:30.877 CC lib/nvme/nvme_ctrlr.o 00:02:30.877 CC lib/nvme/nvme_fabric.o 00:02:30.877 CC lib/nvme/nvme_ns_cmd.o 00:02:30.877 CC lib/nvme/nvme_ns.o 00:02:30.877 CC lib/nvme/nvme_pcie_common.o 00:02:30.877 CC lib/nvme/nvme_pcie.o 00:02:30.877 CC lib/nvme/nvme_qpair.o 00:02:30.877 CC lib/nvme/nvme.o 00:02:30.877 CC lib/nvme/nvme_quirks.o 00:02:30.877 CC lib/nvme/nvme_transport.o 00:02:30.877 CC lib/nvme/nvme_discovery.o 00:02:30.877 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:30.877 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:30.877 CC lib/nvme/nvme_tcp.o 00:02:30.877 CC lib/nvme/nvme_opal.o 00:02:30.877 CC lib/nvme/nvme_io_msg.o 00:02:30.877 CC lib/nvme/nvme_poll_group.o 00:02:30.877 CC lib/nvme/nvme_zns.o 00:02:30.877 CC lib/nvme/nvme_cuse.o 00:02:30.877 CC lib/nvme/nvme_vfio_user.o 00:02:30.877 CC lib/nvme/nvme_rdma.o 00:02:32.264 LIB libspdk_thread.a 00:02:32.264 SO libspdk_thread.so.9.0 00:02:32.264 SYMLINK libspdk_thread.so 00:02:32.264 CC lib/accel/accel.o 00:02:32.264 CC lib/accel/accel_rpc.o 00:02:32.264 CC lib/blob/blobstore.o 00:02:32.264 CC lib/blob/request.o 00:02:32.264 CC lib/accel/accel_sw.o 00:02:32.525 CC lib/blob/zeroes.o 00:02:32.525 CC lib/blob/blob_bs_dev.o 00:02:32.525 CC lib/virtio/virtio.o 00:02:32.525 CC lib/vfu_tgt/tgt_endpoint.o 00:02:32.525 CC lib/virtio/virtio_vhost_user.o 
00:02:32.525 CC lib/vfu_tgt/tgt_rpc.o 00:02:32.525 CC lib/init/json_config.o 00:02:32.525 CC lib/virtio/virtio_vfio_user.o 00:02:32.525 CC lib/init/subsystem.o 00:02:32.525 CC lib/virtio/virtio_pci.o 00:02:32.525 CC lib/init/subsystem_rpc.o 00:02:32.525 CC lib/init/rpc.o 00:02:32.786 LIB libspdk_init.a 00:02:32.786 SO libspdk_init.so.4.0 00:02:32.786 LIB libspdk_nvme.a 00:02:32.786 LIB libspdk_vfu_tgt.a 00:02:32.786 LIB libspdk_virtio.a 00:02:32.786 SO libspdk_vfu_tgt.so.2.0 00:02:32.786 SO libspdk_virtio.so.6.0 00:02:32.786 SYMLINK libspdk_init.so 00:02:32.786 SYMLINK libspdk_vfu_tgt.so 00:02:32.786 SO libspdk_nvme.so.12.0 00:02:32.786 SYMLINK libspdk_virtio.so 00:02:33.047 CC lib/event/app.o 00:02:33.047 CC lib/event/reactor.o 00:02:33.047 CC lib/event/log_rpc.o 00:02:33.047 CC lib/event/app_rpc.o 00:02:33.047 CC lib/event/scheduler_static.o 00:02:33.047 SYMLINK libspdk_nvme.so 00:02:33.309 LIB libspdk_accel.a 00:02:33.309 SO libspdk_accel.so.14.0 00:02:33.309 LIB libspdk_event.a 00:02:33.309 SYMLINK libspdk_accel.so 00:02:33.570 SO libspdk_event.so.12.0 00:02:33.570 SYMLINK libspdk_event.so 00:02:33.570 CC lib/bdev/bdev.o 00:02:33.570 CC lib/bdev/bdev_rpc.o 00:02:33.570 CC lib/bdev/bdev_zone.o 00:02:33.570 CC lib/bdev/part.o 00:02:33.570 CC lib/bdev/scsi_nvme.o 00:02:34.955 LIB libspdk_blob.a 00:02:34.955 SO libspdk_blob.so.10.1 00:02:34.955 SYMLINK libspdk_blob.so 00:02:35.216 CC lib/blobfs/blobfs.o 00:02:35.216 CC lib/lvol/lvol.o 00:02:35.216 CC lib/blobfs/tree.o 00:02:36.157 LIB libspdk_blobfs.a 00:02:36.157 LIB libspdk_bdev.a 00:02:36.157 SO libspdk_blobfs.so.9.0 00:02:36.157 SO libspdk_bdev.so.14.0 00:02:36.157 LIB libspdk_lvol.a 00:02:36.157 SYMLINK libspdk_blobfs.so 00:02:36.157 SO libspdk_lvol.so.9.1 00:02:36.157 SYMLINK libspdk_bdev.so 00:02:36.157 SYMLINK libspdk_lvol.so 00:02:36.417 CC lib/ftl/ftl_core.o 00:02:36.417 CC lib/ftl/ftl_init.o 00:02:36.417 CC lib/nvmf/ctrlr.o 00:02:36.417 CC lib/ftl/ftl_layout.o 00:02:36.417 CC lib/nvmf/ctrlr_discovery.o 
00:02:36.417 CC lib/ftl/ftl_debug.o 00:02:36.417 CC lib/nvmf/ctrlr_bdev.o 00:02:36.417 CC lib/ftl/ftl_io.o 00:02:36.417 CC lib/nbd/nbd.o 00:02:36.417 CC lib/ftl/ftl_sb.o 00:02:36.418 CC lib/ublk/ublk.o 00:02:36.418 CC lib/nbd/nbd_rpc.o 00:02:36.418 CC lib/nvmf/subsystem.o 00:02:36.418 CC lib/nvmf/nvmf.o 00:02:36.418 CC lib/ftl/ftl_l2p.o 00:02:36.418 CC lib/ublk/ublk_rpc.o 00:02:36.418 CC lib/scsi/dev.o 00:02:36.418 CC lib/ftl/ftl_l2p_flat.o 00:02:36.418 CC lib/nvmf/nvmf_rpc.o 00:02:36.418 CC lib/scsi/lun.o 00:02:36.418 CC lib/ftl/ftl_nv_cache.o 00:02:36.418 CC lib/scsi/port.o 00:02:36.418 CC lib/nvmf/transport.o 00:02:36.418 CC lib/nvmf/tcp.o 00:02:36.418 CC lib/ftl/ftl_band.o 00:02:36.418 CC lib/scsi/scsi.o 00:02:36.418 CC lib/nvmf/vfio_user.o 00:02:36.418 CC lib/ftl/ftl_band_ops.o 00:02:36.418 CC lib/scsi/scsi_bdev.o 00:02:36.418 CC lib/ftl/ftl_writer.o 00:02:36.418 CC lib/nvmf/rdma.o 00:02:36.418 CC lib/scsi/scsi_pr.o 00:02:36.418 CC lib/scsi/scsi_rpc.o 00:02:36.418 CC lib/ftl/ftl_rq.o 00:02:36.418 CC lib/ftl/ftl_reloc.o 00:02:36.418 CC lib/scsi/task.o 00:02:36.418 CC lib/ftl/ftl_l2p_cache.o 00:02:36.418 CC lib/ftl/ftl_p2l.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:36.418 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:36.418 CC lib/ftl/utils/ftl_conf.o 00:02:36.418 CC lib/ftl/utils/ftl_md.o 00:02:36.418 CC lib/ftl/utils/ftl_mempool.o 00:02:36.418 CC lib/ftl/utils/ftl_bitmap.o 00:02:36.418 CC lib/ftl/utils/ftl_property.o 00:02:36.418 CC 
lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:36.418 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:36.418 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:36.418 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:36.418 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:36.418 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:36.418 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:36.418 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:36.418 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:36.418 CC lib/ftl/base/ftl_base_dev.o 00:02:36.418 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:36.418 CC lib/ftl/base/ftl_base_bdev.o 00:02:36.418 CC lib/ftl/ftl_trace.o 00:02:36.989 LIB libspdk_nbd.a 00:02:36.989 LIB libspdk_scsi.a 00:02:36.989 SO libspdk_nbd.so.6.0 00:02:36.989 SO libspdk_scsi.so.8.0 00:02:36.989 SYMLINK libspdk_nbd.so 00:02:36.989 LIB libspdk_ublk.a 00:02:36.989 SO libspdk_ublk.so.2.0 00:02:36.989 SYMLINK libspdk_scsi.so 00:02:36.989 SYMLINK libspdk_ublk.so 00:02:37.251 CC lib/iscsi/conn.o 00:02:37.251 CC lib/iscsi/init_grp.o 00:02:37.251 CC lib/iscsi/iscsi.o 00:02:37.251 CC lib/vhost/vhost.o 00:02:37.251 CC lib/iscsi/md5.o 00:02:37.251 CC lib/vhost/vhost_rpc.o 00:02:37.251 CC lib/iscsi/param.o 00:02:37.251 CC lib/vhost/vhost_scsi.o 00:02:37.251 CC lib/iscsi/portal_grp.o 00:02:37.251 CC lib/vhost/vhost_blk.o 00:02:37.251 CC lib/vhost/rte_vhost_user.o 00:02:37.251 CC lib/iscsi/tgt_node.o 00:02:37.251 CC lib/iscsi/iscsi_subsystem.o 00:02:37.251 CC lib/iscsi/iscsi_rpc.o 00:02:37.251 CC lib/iscsi/task.o 00:02:37.251 LIB libspdk_ftl.a 00:02:37.513 SO libspdk_ftl.so.8.0 00:02:37.775 SYMLINK libspdk_ftl.so 00:02:38.349 LIB libspdk_nvmf.a 00:02:38.350 LIB libspdk_vhost.a 00:02:38.350 SO libspdk_nvmf.so.17.0 00:02:38.350 SO libspdk_vhost.so.7.1 00:02:38.350 SYMLINK libspdk_vhost.so 00:02:38.611 SYMLINK libspdk_nvmf.so 00:02:38.611 LIB libspdk_iscsi.a 00:02:38.611 SO libspdk_iscsi.so.7.0 00:02:38.611 SYMLINK libspdk_iscsi.so 00:02:39.184 CC module/env_dpdk/env_dpdk_rpc.o 00:02:39.184 CC module/vfu_device/vfu_virtio.o 00:02:39.184 CC 
module/vfu_device/vfu_virtio_blk.o 00:02:39.184 CC module/vfu_device/vfu_virtio_scsi.o 00:02:39.184 CC module/vfu_device/vfu_virtio_rpc.o 00:02:39.184 CC module/blob/bdev/blob_bdev.o 00:02:39.184 CC module/accel/error/accel_error.o 00:02:39.184 CC module/accel/error/accel_error_rpc.o 00:02:39.184 CC module/sock/posix/posix.o 00:02:39.184 CC module/accel/ioat/accel_ioat.o 00:02:39.184 CC module/accel/dsa/accel_dsa.o 00:02:39.184 CC module/accel/ioat/accel_ioat_rpc.o 00:02:39.184 CC module/accel/dsa/accel_dsa_rpc.o 00:02:39.184 CC module/accel/iaa/accel_iaa.o 00:02:39.184 CC module/accel/iaa/accel_iaa_rpc.o 00:02:39.184 CC module/scheduler/gscheduler/gscheduler.o 00:02:39.184 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:39.184 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:39.184 LIB libspdk_env_dpdk_rpc.a 00:02:39.184 SO libspdk_env_dpdk_rpc.so.5.0 00:02:39.446 SYMLINK libspdk_env_dpdk_rpc.so 00:02:39.446 LIB libspdk_scheduler_gscheduler.a 00:02:39.446 LIB libspdk_scheduler_dpdk_governor.a 00:02:39.446 LIB libspdk_accel_error.a 00:02:39.446 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:39.446 LIB libspdk_scheduler_dynamic.a 00:02:39.446 LIB libspdk_accel_ioat.a 00:02:39.446 SO libspdk_scheduler_gscheduler.so.3.0 00:02:39.446 LIB libspdk_accel_iaa.a 00:02:39.446 LIB libspdk_accel_dsa.a 00:02:39.446 SO libspdk_accel_error.so.1.0 00:02:39.446 SO libspdk_scheduler_dynamic.so.3.0 00:02:39.446 SO libspdk_accel_ioat.so.5.0 00:02:39.446 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:39.446 LIB libspdk_blob_bdev.a 00:02:39.446 SO libspdk_accel_iaa.so.2.0 00:02:39.446 SO libspdk_accel_dsa.so.4.0 00:02:39.446 SYMLINK libspdk_scheduler_gscheduler.so 00:02:39.446 SO libspdk_blob_bdev.so.10.1 00:02:39.446 SYMLINK libspdk_scheduler_dynamic.so 00:02:39.446 SYMLINK libspdk_accel_error.so 00:02:39.446 SYMLINK libspdk_accel_ioat.so 00:02:39.446 SYMLINK libspdk_accel_dsa.so 00:02:39.446 SYMLINK libspdk_accel_iaa.so 00:02:39.707 SYMLINK libspdk_blob_bdev.so 
00:02:39.707 LIB libspdk_vfu_device.a 00:02:39.707 SO libspdk_vfu_device.so.2.0 00:02:39.707 SYMLINK libspdk_vfu_device.so 00:02:39.968 LIB libspdk_sock_posix.a 00:02:39.968 SO libspdk_sock_posix.so.5.0 00:02:39.968 CC module/bdev/delay/vbdev_delay.o 00:02:39.968 CC module/blobfs/bdev/blobfs_bdev.o 00:02:39.968 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:39.968 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:39.968 CC module/bdev/gpt/vbdev_gpt.o 00:02:39.968 CC module/bdev/error/vbdev_error.o 00:02:39.968 CC module/bdev/gpt/gpt.o 00:02:39.968 CC module/bdev/error/vbdev_error_rpc.o 00:02:39.968 CC module/bdev/null/bdev_null.o 00:02:39.969 CC module/bdev/lvol/vbdev_lvol.o 00:02:39.969 CC module/bdev/aio/bdev_aio.o 00:02:39.969 CC module/bdev/null/bdev_null_rpc.o 00:02:39.969 CC module/bdev/aio/bdev_aio_rpc.o 00:02:39.969 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:39.969 CC module/bdev/passthru/vbdev_passthru.o 00:02:39.969 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:39.969 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:39.969 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:39.969 CC module/bdev/malloc/bdev_malloc.o 00:02:39.969 CC module/bdev/raid/bdev_raid.o 00:02:39.969 CC module/bdev/split/vbdev_split.o 00:02:39.969 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:39.969 CC module/bdev/raid/bdev_raid_rpc.o 00:02:39.969 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:39.969 CC module/bdev/split/vbdev_split_rpc.o 00:02:39.969 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:39.969 CC module/bdev/raid/bdev_raid_sb.o 00:02:39.969 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:39.969 CC module/bdev/nvme/bdev_nvme.o 00:02:39.969 CC module/bdev/raid/raid0.o 00:02:39.969 CC module/bdev/ftl/bdev_ftl.o 00:02:39.969 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:39.969 CC module/bdev/iscsi/bdev_iscsi.o 00:02:39.969 CC module/bdev/raid/raid1.o 00:02:39.969 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:39.969 CC module/bdev/nvme/nvme_rpc.o 00:02:39.969 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:02:39.969 CC module/bdev/raid/concat.o 00:02:39.969 CC module/bdev/nvme/bdev_mdns_client.o 00:02:39.969 CC module/bdev/nvme/vbdev_opal.o 00:02:39.969 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:39.969 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:39.969 SYMLINK libspdk_sock_posix.so 00:02:40.229 LIB libspdk_blobfs_bdev.a 00:02:40.229 SO libspdk_blobfs_bdev.so.5.0 00:02:40.229 LIB libspdk_bdev_null.a 00:02:40.229 LIB libspdk_bdev_gpt.a 00:02:40.229 LIB libspdk_bdev_error.a 00:02:40.229 LIB libspdk_bdev_passthru.a 00:02:40.229 LIB libspdk_bdev_split.a 00:02:40.229 SO libspdk_bdev_null.so.5.0 00:02:40.229 SYMLINK libspdk_blobfs_bdev.so 00:02:40.229 SO libspdk_bdev_error.so.5.0 00:02:40.229 SO libspdk_bdev_gpt.so.5.0 00:02:40.229 SO libspdk_bdev_passthru.so.5.0 00:02:40.229 SO libspdk_bdev_split.so.5.0 00:02:40.229 LIB libspdk_bdev_ftl.a 00:02:40.490 LIB libspdk_bdev_zone_block.a 00:02:40.490 LIB libspdk_bdev_delay.a 00:02:40.490 LIB libspdk_bdev_aio.a 00:02:40.490 SO libspdk_bdev_ftl.so.5.0 00:02:40.490 SYMLINK libspdk_bdev_null.so 00:02:40.490 LIB libspdk_bdev_malloc.a 00:02:40.490 SYMLINK libspdk_bdev_error.so 00:02:40.490 SO libspdk_bdev_zone_block.so.5.0 00:02:40.490 SYMLINK libspdk_bdev_passthru.so 00:02:40.490 SYMLINK libspdk_bdev_gpt.so 00:02:40.490 LIB libspdk_bdev_iscsi.a 00:02:40.490 SYMLINK libspdk_bdev_split.so 00:02:40.490 SO libspdk_bdev_delay.so.5.0 00:02:40.490 SO libspdk_bdev_aio.so.5.0 00:02:40.490 SO libspdk_bdev_malloc.so.5.0 00:02:40.490 SO libspdk_bdev_iscsi.so.5.0 00:02:40.490 SYMLINK libspdk_bdev_ftl.so 00:02:40.490 SYMLINK libspdk_bdev_zone_block.so 00:02:40.490 SYMLINK libspdk_bdev_delay.so 00:02:40.490 LIB libspdk_bdev_lvol.a 00:02:40.490 SYMLINK libspdk_bdev_aio.so 00:02:40.490 SYMLINK libspdk_bdev_malloc.so 00:02:40.490 SYMLINK libspdk_bdev_iscsi.so 00:02:40.490 SO libspdk_bdev_lvol.so.5.0 00:02:40.490 LIB libspdk_bdev_virtio.a 00:02:40.490 SO libspdk_bdev_virtio.so.5.0 00:02:40.490 SYMLINK 
libspdk_bdev_lvol.so 00:02:40.750 SYMLINK libspdk_bdev_virtio.so 00:02:40.750 LIB libspdk_bdev_raid.a 00:02:41.011 SO libspdk_bdev_raid.so.5.0 00:02:41.011 SYMLINK libspdk_bdev_raid.so 00:02:41.953 LIB libspdk_bdev_nvme.a 00:02:41.953 SO libspdk_bdev_nvme.so.6.0 00:02:42.214 SYMLINK libspdk_bdev_nvme.so 00:02:42.787 CC module/event/subsystems/vmd/vmd.o 00:02:42.787 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:42.787 CC module/event/subsystems/scheduler/scheduler.o 00:02:42.787 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:42.787 CC module/event/subsystems/sock/sock.o 00:02:42.787 CC module/event/subsystems/iobuf/iobuf.o 00:02:42.787 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:42.787 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:42.787 LIB libspdk_event_scheduler.a 00:02:42.787 LIB libspdk_event_sock.a 00:02:42.787 LIB libspdk_event_vmd.a 00:02:42.787 LIB libspdk_event_vhost_blk.a 00:02:42.787 LIB libspdk_event_vfu_tgt.a 00:02:42.787 LIB libspdk_event_iobuf.a 00:02:42.787 SO libspdk_event_scheduler.so.3.0 00:02:42.787 SO libspdk_event_sock.so.4.0 00:02:42.787 SO libspdk_event_vhost_blk.so.2.0 00:02:42.787 SO libspdk_event_vmd.so.5.0 00:02:42.787 SO libspdk_event_vfu_tgt.so.2.0 00:02:42.787 SO libspdk_event_iobuf.so.2.0 00:02:42.787 SYMLINK libspdk_event_scheduler.so 00:02:42.787 SYMLINK libspdk_event_sock.so 00:02:42.787 SYMLINK libspdk_event_vhost_blk.so 00:02:42.787 SYMLINK libspdk_event_vfu_tgt.so 00:02:43.048 SYMLINK libspdk_event_vmd.so 00:02:43.048 SYMLINK libspdk_event_iobuf.so 00:02:43.048 CC module/event/subsystems/accel/accel.o 00:02:43.308 LIB libspdk_event_accel.a 00:02:43.308 SO libspdk_event_accel.so.5.0 00:02:43.308 SYMLINK libspdk_event_accel.so 00:02:43.569 CC module/event/subsystems/bdev/bdev.o 00:02:43.830 LIB libspdk_event_bdev.a 00:02:43.830 SO libspdk_event_bdev.so.5.0 00:02:43.830 SYMLINK libspdk_event_bdev.so 00:02:44.091 CC module/event/subsystems/nbd/nbd.o 00:02:44.091 CC module/event/subsystems/scsi/scsi.o 
00:02:44.091 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:44.091 CC module/event/subsystems/ublk/ublk.o 00:02:44.091 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:44.353 LIB libspdk_event_nbd.a 00:02:44.353 LIB libspdk_event_ublk.a 00:02:44.353 LIB libspdk_event_scsi.a 00:02:44.353 SO libspdk_event_nbd.so.5.0 00:02:44.353 SO libspdk_event_ublk.so.2.0 00:02:44.353 SO libspdk_event_scsi.so.5.0 00:02:44.353 LIB libspdk_event_nvmf.a 00:02:44.353 SYMLINK libspdk_event_nbd.so 00:02:44.354 SYMLINK libspdk_event_ublk.so 00:02:44.354 SYMLINK libspdk_event_scsi.so 00:02:44.354 SO libspdk_event_nvmf.so.5.0 00:02:44.614 SYMLINK libspdk_event_nvmf.so 00:02:44.614 CC module/event/subsystems/iscsi/iscsi.o 00:02:44.614 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:44.876 LIB libspdk_event_vhost_scsi.a 00:02:44.876 SO libspdk_event_vhost_scsi.so.2.0 00:02:44.876 LIB libspdk_event_iscsi.a 00:02:44.876 SO libspdk_event_iscsi.so.5.0 00:02:44.876 SYMLINK libspdk_event_vhost_scsi.so 00:02:45.136 SYMLINK libspdk_event_iscsi.so 00:02:45.136 SO libspdk.so.5.0 00:02:45.136 SYMLINK libspdk.so 00:02:45.395 CC app/spdk_lspci/spdk_lspci.o 00:02:45.395 CC app/spdk_nvme_perf/perf.o 00:02:45.395 CC app/spdk_nvme_identify/identify.o 00:02:45.395 TEST_HEADER include/spdk/accel.h 00:02:45.395 TEST_HEADER include/spdk/accel_module.h 00:02:45.396 TEST_HEADER include/spdk/barrier.h 00:02:45.396 TEST_HEADER include/spdk/bdev.h 00:02:45.396 CC app/trace_record/trace_record.o 00:02:45.396 CC app/spdk_top/spdk_top.o 00:02:45.396 TEST_HEADER include/spdk/assert.h 00:02:45.396 TEST_HEADER include/spdk/base64.h 00:02:45.396 TEST_HEADER include/spdk/bdev_module.h 00:02:45.396 TEST_HEADER include/spdk/bdev_zone.h 00:02:45.396 TEST_HEADER include/spdk/bit_pool.h 00:02:45.396 TEST_HEADER include/spdk/bit_array.h 00:02:45.396 TEST_HEADER include/spdk/blob_bdev.h 00:02:45.396 CC app/spdk_nvme_discover/discovery_aer.o 00:02:45.396 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:45.396 TEST_HEADER 
include/spdk/blobfs.h 00:02:45.396 CXX app/trace/trace.o 00:02:45.396 TEST_HEADER include/spdk/conf.h 00:02:45.396 TEST_HEADER include/spdk/config.h 00:02:45.396 TEST_HEADER include/spdk/blob.h 00:02:45.396 TEST_HEADER include/spdk/cpuset.h 00:02:45.396 TEST_HEADER include/spdk/crc16.h 00:02:45.396 TEST_HEADER include/spdk/crc32.h 00:02:45.396 CC test/rpc_client/rpc_client_test.o 00:02:45.396 TEST_HEADER include/spdk/dif.h 00:02:45.396 TEST_HEADER include/spdk/dma.h 00:02:45.396 TEST_HEADER include/spdk/crc64.h 00:02:45.396 TEST_HEADER include/spdk/endian.h 00:02:45.396 TEST_HEADER include/spdk/event.h 00:02:45.396 TEST_HEADER include/spdk/env_dpdk.h 00:02:45.396 TEST_HEADER include/spdk/fd_group.h 00:02:45.396 TEST_HEADER include/spdk/env.h 00:02:45.396 TEST_HEADER include/spdk/fd.h 00:02:45.396 CC app/nvmf_tgt/nvmf_main.o 00:02:45.396 TEST_HEADER include/spdk/gpt_spec.h 00:02:45.396 CC app/spdk_dd/spdk_dd.o 00:02:45.396 TEST_HEADER include/spdk/file.h 00:02:45.396 TEST_HEADER include/spdk/hexlify.h 00:02:45.396 TEST_HEADER include/spdk/ftl.h 00:02:45.396 TEST_HEADER include/spdk/histogram_data.h 00:02:45.396 TEST_HEADER include/spdk/idxd_spec.h 00:02:45.396 TEST_HEADER include/spdk/idxd.h 00:02:45.396 TEST_HEADER include/spdk/init.h 00:02:45.396 TEST_HEADER include/spdk/ioat.h 00:02:45.396 TEST_HEADER include/spdk/ioat_spec.h 00:02:45.396 CC app/vhost/vhost.o 00:02:45.396 TEST_HEADER include/spdk/jsonrpc.h 00:02:45.396 TEST_HEADER include/spdk/json.h 00:02:45.396 TEST_HEADER include/spdk/iscsi_spec.h 00:02:45.396 TEST_HEADER include/spdk/likely.h 00:02:45.396 TEST_HEADER include/spdk/log.h 00:02:45.396 TEST_HEADER include/spdk/lvol.h 00:02:45.396 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:45.396 CC app/iscsi_tgt/iscsi_tgt.o 00:02:45.396 TEST_HEADER include/spdk/memory.h 00:02:45.663 TEST_HEADER include/spdk/mmio.h 00:02:45.663 TEST_HEADER include/spdk/nbd.h 00:02:45.663 TEST_HEADER include/spdk/notify.h 00:02:45.663 TEST_HEADER include/spdk/nvme.h 
00:02:45.663 CC app/spdk_tgt/spdk_tgt.o 00:02:45.663 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:45.663 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:45.663 TEST_HEADER include/spdk/nvme_intel.h 00:02:45.663 TEST_HEADER include/spdk/nvme_zns.h 00:02:45.663 TEST_HEADER include/spdk/nvme_spec.h 00:02:45.663 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:45.663 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:45.663 TEST_HEADER include/spdk/nvmf.h 00:02:45.663 TEST_HEADER include/spdk/nvmf_spec.h 00:02:45.663 TEST_HEADER include/spdk/opal.h 00:02:45.663 TEST_HEADER include/spdk/nvmf_transport.h 00:02:45.663 TEST_HEADER include/spdk/opal_spec.h 00:02:45.663 TEST_HEADER include/spdk/pci_ids.h 00:02:45.663 TEST_HEADER include/spdk/pipe.h 00:02:45.663 TEST_HEADER include/spdk/queue.h 00:02:45.663 TEST_HEADER include/spdk/reduce.h 00:02:45.663 TEST_HEADER include/spdk/scheduler.h 00:02:45.663 TEST_HEADER include/spdk/rpc.h 00:02:45.663 TEST_HEADER include/spdk/scsi.h 00:02:45.663 TEST_HEADER include/spdk/scsi_spec.h 00:02:45.663 TEST_HEADER include/spdk/sock.h 00:02:45.663 TEST_HEADER include/spdk/string.h 00:02:45.663 TEST_HEADER include/spdk/stdinc.h 00:02:45.663 TEST_HEADER include/spdk/thread.h 00:02:45.663 TEST_HEADER include/spdk/trace.h 00:02:45.663 TEST_HEADER include/spdk/trace_parser.h 00:02:45.663 TEST_HEADER include/spdk/ublk.h 00:02:45.663 TEST_HEADER include/spdk/tree.h 00:02:45.663 TEST_HEADER include/spdk/util.h 00:02:45.663 TEST_HEADER include/spdk/uuid.h 00:02:45.663 TEST_HEADER include/spdk/version.h 00:02:45.663 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:45.663 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:45.663 TEST_HEADER include/spdk/vhost.h 00:02:45.663 TEST_HEADER include/spdk/xor.h 00:02:45.663 TEST_HEADER include/spdk/zipf.h 00:02:45.663 CXX test/cpp_headers/accel.o 00:02:45.663 TEST_HEADER include/spdk/vmd.h 00:02:45.663 CXX test/cpp_headers/assert.o 00:02:45.663 CXX test/cpp_headers/accel_module.o 00:02:45.663 CXX test/cpp_headers/barrier.o 
00:02:45.663 CXX test/cpp_headers/bdev.o 00:02:45.663 CXX test/cpp_headers/bdev_zone.o 00:02:45.663 CXX test/cpp_headers/bdev_module.o 00:02:45.663 CXX test/cpp_headers/base64.o 00:02:45.663 CXX test/cpp_headers/bit_array.o 00:02:45.663 CXX test/cpp_headers/bit_pool.o 00:02:45.663 CXX test/cpp_headers/blobfs.o 00:02:45.663 CXX test/cpp_headers/blobfs_bdev.o 00:02:45.663 CXX test/cpp_headers/blob_bdev.o 00:02:45.663 CXX test/cpp_headers/conf.o 00:02:45.663 CXX test/cpp_headers/blob.o 00:02:45.663 CXX test/cpp_headers/config.o 00:02:45.663 CXX test/cpp_headers/cpuset.o 00:02:45.663 CC examples/nvme/hotplug/hotplug.o 00:02:45.663 CXX test/cpp_headers/crc32.o 00:02:45.663 CXX test/cpp_headers/crc16.o 00:02:45.663 CXX test/cpp_headers/crc64.o 00:02:45.663 CXX test/cpp_headers/dma.o 00:02:45.663 CXX test/cpp_headers/dif.o 00:02:45.663 CXX test/cpp_headers/endian.o 00:02:45.663 CXX test/cpp_headers/env.o 00:02:45.663 CXX test/cpp_headers/env_dpdk.o 00:02:45.663 CXX test/cpp_headers/fd_group.o 00:02:45.663 CXX test/cpp_headers/event.o 00:02:45.663 CC examples/idxd/perf/perf.o 00:02:45.663 CC examples/sock/hello_world/hello_sock.o 00:02:45.663 CC examples/nvme/hello_world/hello_world.o 00:02:45.663 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:45.663 CC test/app/stub/stub.o 00:02:45.663 CXX test/cpp_headers/fd.o 00:02:45.663 CXX test/cpp_headers/file.o 00:02:45.663 CXX test/cpp_headers/histogram_data.o 00:02:45.663 CXX test/cpp_headers/ftl.o 00:02:45.663 CXX test/cpp_headers/hexlify.o 00:02:45.663 CC examples/accel/perf/accel_perf.o 00:02:45.663 CXX test/cpp_headers/gpt_spec.o 00:02:45.663 CXX test/cpp_headers/idxd.o 00:02:45.663 CC examples/nvme/arbitration/arbitration.o 00:02:45.663 CXX test/cpp_headers/idxd_spec.o 00:02:45.663 CXX test/cpp_headers/ioat.o 00:02:45.663 CXX test/cpp_headers/ioat_spec.o 00:02:45.663 CXX test/cpp_headers/init.o 00:02:45.663 CXX test/cpp_headers/iscsi_spec.o 00:02:45.663 CXX test/cpp_headers/json.o 00:02:45.663 CC 
test/env/memory/memory_ut.o 00:02:45.663 CC test/nvme/overhead/overhead.o 00:02:45.663 CC examples/util/zipf/zipf.o 00:02:45.663 CC examples/vmd/lsvmd/lsvmd.o 00:02:45.663 CC test/env/vtophys/vtophys.o 00:02:45.663 CXX test/cpp_headers/jsonrpc.o 00:02:45.663 CC test/nvme/startup/startup.o 00:02:45.663 CXX test/cpp_headers/likely.o 00:02:45.663 CXX test/cpp_headers/lvol.o 00:02:45.663 CXX test/cpp_headers/log.o 00:02:45.663 CXX test/cpp_headers/mmio.o 00:02:45.663 CXX test/cpp_headers/memory.o 00:02:45.663 CC examples/ioat/perf/perf.o 00:02:45.663 CXX test/cpp_headers/nbd.o 00:02:45.663 CXX test/cpp_headers/notify.o 00:02:45.663 CXX test/cpp_headers/nvme_ocssd.o 00:02:45.663 CXX test/cpp_headers/nvme.o 00:02:45.663 CXX test/cpp_headers/nvme_intel.o 00:02:45.663 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:45.663 CC app/fio/nvme/fio_plugin.o 00:02:45.663 CXX test/cpp_headers/nvme_spec.o 00:02:45.663 CXX test/cpp_headers/nvme_zns.o 00:02:45.663 CC examples/vmd/led/led.o 00:02:45.663 CC test/nvme/compliance/nvme_compliance.o 00:02:45.663 CC test/nvme/reset/reset.o 00:02:45.663 CXX test/cpp_headers/nvmf_cmd.o 00:02:45.663 CC examples/nvme/reconnect/reconnect.o 00:02:45.664 CC test/nvme/fdp/fdp.o 00:02:45.664 CC test/app/histogram_perf/histogram_perf.o 00:02:45.664 CC test/env/pci/pci_ut.o 00:02:45.664 CC test/event/event_perf/event_perf.o 00:02:45.664 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:45.664 CC test/nvme/simple_copy/simple_copy.o 00:02:45.664 CC examples/ioat/verify/verify.o 00:02:45.664 CC test/app/jsoncat/jsoncat.o 00:02:45.664 CXX test/cpp_headers/nvmf.o 00:02:45.664 CXX test/cpp_headers/nvmf_spec.o 00:02:45.664 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:45.664 CXX test/cpp_headers/nvmf_transport.o 00:02:45.664 CXX test/cpp_headers/opal.o 00:02:45.664 CXX test/cpp_headers/opal_spec.o 00:02:45.664 CC test/event/reactor/reactor.o 00:02:45.664 CC test/nvme/reserve/reserve.o 00:02:45.664 CXX test/cpp_headers/pci_ids.o 00:02:45.664 CC test/nvme/e2edp/nvme_dp.o 
00:02:45.664 CC test/nvme/connect_stress/connect_stress.o 00:02:45.664 CC test/nvme/sgl/sgl.o 00:02:45.664 CC test/nvme/aer/aer.o 00:02:45.664 CC app/fio/bdev/fio_plugin.o 00:02:45.664 CXX test/cpp_headers/pipe.o 00:02:45.664 CC test/blobfs/mkfs/mkfs.o 00:02:45.664 CXX test/cpp_headers/reduce.o 00:02:45.664 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:45.664 CC test/bdev/bdevio/bdevio.o 00:02:45.664 CXX test/cpp_headers/rpc.o 00:02:45.664 CC test/thread/poller_perf/poller_perf.o 00:02:45.664 CC test/nvme/boot_partition/boot_partition.o 00:02:45.664 CC test/accel/dif/dif.o 00:02:45.664 CC test/dma/test_dma/test_dma.o 00:02:45.664 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:45.664 CXX test/cpp_headers/queue.o 00:02:45.664 CXX test/cpp_headers/scheduler.o 00:02:45.664 CC test/nvme/err_injection/err_injection.o 00:02:45.664 CC test/event/app_repeat/app_repeat.o 00:02:45.664 CC test/nvme/cuse/cuse.o 00:02:45.664 CC examples/nvme/abort/abort.o 00:02:45.664 CC examples/bdev/bdevperf/bdevperf.o 00:02:45.664 CC examples/blob/hello_world/hello_blob.o 00:02:45.664 CC examples/nvmf/nvmf/nvmf.o 00:02:45.664 CC test/nvme/fused_ordering/fused_ordering.o 00:02:45.664 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:45.664 CXX test/cpp_headers/scsi.o 00:02:45.664 CC examples/blob/cli/blobcli.o 00:02:45.664 CC test/event/reactor_perf/reactor_perf.o 00:02:45.664 CC test/event/scheduler/scheduler.o 00:02:45.664 CC examples/bdev/hello_world/hello_bdev.o 00:02:45.664 CC test/app/bdev_svc/bdev_svc.o 00:02:45.664 CC examples/thread/thread/thread_ex.o 00:02:45.664 CXX test/cpp_headers/scsi_spec.o 00:02:45.664 LINK spdk_lspci 00:02:45.932 CXX test/cpp_headers/sock.o 00:02:45.932 CC test/lvol/esnap/esnap.o 00:02:45.932 CC test/env/mem_callbacks/mem_callbacks.o 00:02:45.932 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:45.932 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:45.932 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:45.932 LINK rpc_client_test 00:02:46.200 LINK 
nvmf_tgt 00:02:46.200 LINK spdk_tgt 00:02:46.200 LINK vhost 00:02:46.200 LINK interrupt_tgt 00:02:46.200 LINK spdk_nvme_discover 00:02:46.200 LINK lsvmd 00:02:46.200 LINK zipf 00:02:46.200 LINK iscsi_tgt 00:02:46.200 LINK reactor 00:02:46.200 LINK spdk_trace_record 00:02:46.200 LINK poller_perf 00:02:46.200 LINK env_dpdk_post_init 00:02:46.200 LINK led 00:02:46.200 LINK histogram_perf 00:02:46.200 LINK event_perf 00:02:46.200 LINK stub 00:02:46.200 LINK connect_stress 00:02:46.200 LINK startup 00:02:46.200 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:46.200 LINK jsoncat 00:02:46.200 LINK boot_partition 00:02:46.200 LINK vtophys 00:02:46.459 LINK pmr_persistence 00:02:46.459 LINK reactor_perf 00:02:46.459 CXX test/cpp_headers/stdinc.o 00:02:46.459 CXX test/cpp_headers/string.o 00:02:46.459 LINK app_repeat 00:02:46.459 LINK mkfs 00:02:46.459 CXX test/cpp_headers/thread.o 00:02:46.459 LINK cmb_copy 00:02:46.459 LINK doorbell_aers 00:02:46.459 CXX test/cpp_headers/trace.o 00:02:46.459 LINK reserve 00:02:46.459 CXX test/cpp_headers/trace_parser.o 00:02:46.459 LINK verify 00:02:46.459 LINK hello_world 00:02:46.459 CXX test/cpp_headers/tree.o 00:02:46.459 CXX test/cpp_headers/ublk.o 00:02:46.459 CXX test/cpp_headers/util.o 00:02:46.459 LINK fused_ordering 00:02:46.459 CXX test/cpp_headers/uuid.o 00:02:46.459 LINK ioat_perf 00:02:46.459 LINK spdk_dd 00:02:46.459 CXX test/cpp_headers/version.o 00:02:46.459 CXX test/cpp_headers/vfio_user_pci.o 00:02:46.459 LINK scheduler 00:02:46.459 LINK simple_copy 00:02:46.459 CXX test/cpp_headers/vfio_user_spec.o 00:02:46.459 LINK err_injection 00:02:46.459 LINK sgl 00:02:46.459 CXX test/cpp_headers/vhost.o 00:02:46.459 CXX test/cpp_headers/vmd.o 00:02:46.459 CXX test/cpp_headers/xor.o 00:02:46.459 CXX test/cpp_headers/zipf.o 00:02:46.459 LINK hello_sock 00:02:46.459 LINK bdev_svc 00:02:46.459 LINK hotplug 00:02:46.459 LINK nvme_dp 00:02:46.459 LINK hello_bdev 00:02:46.459 LINK nvme_compliance 00:02:46.459 LINK hello_blob 00:02:46.459 
LINK reset 00:02:46.459 LINK idxd_perf 00:02:46.459 LINK overhead 00:02:46.459 LINK nvmf 00:02:46.459 LINK thread 00:02:46.719 LINK fdp 00:02:46.719 LINK bdevio 00:02:46.719 LINK aer 00:02:46.719 LINK reconnect 00:02:46.719 LINK arbitration 00:02:46.719 LINK pci_ut 00:02:46.719 LINK test_dma 00:02:46.719 LINK dif 00:02:46.719 LINK spdk_trace 00:02:46.719 LINK abort 00:02:46.719 LINK spdk_bdev 00:02:46.719 LINK accel_perf 00:02:46.719 LINK nvme_fuzz 00:02:46.719 LINK blobcli 00:02:46.719 LINK nvme_manage 00:02:46.719 LINK spdk_nvme 00:02:46.980 LINK spdk_nvme_perf 00:02:46.980 LINK vhost_fuzz 00:02:46.980 LINK mem_callbacks 00:02:46.980 LINK spdk_nvme_identify 00:02:46.980 LINK bdevperf 00:02:46.980 LINK spdk_top 00:02:46.980 LINK memory_ut 00:02:47.241 LINK cuse 00:02:47.881 LINK iscsi_fuzz 00:02:50.426 LINK esnap 00:02:50.426 00:02:50.426 real 0m33.521s 00:02:50.426 user 5m10.417s 00:02:50.426 sys 3m18.277s 00:02:50.426 17:12:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:50.426 17:12:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.426 ************************************ 00:02:50.426 END TEST make 00:02:50.426 ************************************ 00:02:50.687 17:12:58 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:50.687 17:12:58 -- nvmf/common.sh@7 -- # uname -s 00:02:50.687 17:12:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:50.687 17:12:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:50.687 17:12:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:50.687 17:12:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:50.687 17:12:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:50.687 17:12:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:50.687 17:12:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:50.687 17:12:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:50.687 17:12:58 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:50.687 17:12:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:50.687 17:12:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:50.687 17:12:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:50.687 17:12:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:50.687 17:12:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:50.687 17:12:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:50.687 17:12:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:50.687 17:12:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:50.687 17:12:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.687 17:12:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.687 17:12:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.687 17:12:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.687 17:12:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.687 17:12:58 -- paths/export.sh@5 -- # export PATH 00:02:50.687 17:12:58 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.687 17:12:58 -- nvmf/common.sh@46 -- # : 0 00:02:50.687 17:12:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:50.687 17:12:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:50.687 17:12:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:50.687 17:12:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:50.687 17:12:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:50.687 17:12:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:50.687 17:12:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:50.687 17:12:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:50.687 17:12:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:50.687 17:12:58 -- spdk/autotest.sh@32 -- # uname -s 00:02:50.687 17:12:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:50.687 17:12:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:50.687 17:12:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.687 17:12:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:50.688 17:12:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.688 17:12:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:50.688 17:12:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:50.688 17:12:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:50.688 17:12:59 -- spdk/autotest.sh@48 -- # udevadm_pid=2911812 00:02:50.688 17:12:59 -- spdk/autotest.sh@51 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:50.688 17:12:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:50.688 17:12:59 -- spdk/autotest.sh@54 -- # echo 2911814 00:02:50.688 17:12:59 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:50.688 17:12:59 -- spdk/autotest.sh@56 -- # echo 2911815 00:02:50.688 17:12:59 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:50.688 17:12:59 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:50.688 17:12:59 -- spdk/autotest.sh@60 -- # echo 2911816 00:02:50.688 17:12:59 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:50.688 17:12:59 -- spdk/autotest.sh@62 -- # echo 2911817 00:02:50.688 17:12:59 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:50.688 17:12:59 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:50.688 17:12:59 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:50.688 17:12:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:50.688 17:12:59 -- common/autotest_common.sh@10 -- # set +x 00:02:50.688 17:12:59 -- spdk/autotest.sh@70 -- # create_test_list 00:02:50.688 17:12:59 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:50.688 17:12:59 -- common/autotest_common.sh@10 -- # set +x 00:02:50.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:50.688 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log
00:02:50.688 17:12:59 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:50.688 17:12:59 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:50.688 17:12:59 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:50.688 17:12:59 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:50.688 17:12:59 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:50.688 17:12:59 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod
00:02:50.688 17:12:59 -- common/autotest_common.sh@1440 -- # uname
00:02:50.688 17:12:59 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']'
00:02:50.688 17:12:59 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf
00:02:50.688 17:12:59 -- common/autotest_common.sh@1460 -- # uname
00:02:50.688 17:12:59 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]]
00:02:50.688 17:12:59 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk
00:02:50.688 17:12:59 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc
00:02:50.688 17:12:59 -- spdk/autotest.sh@83 -- # hash lcov
00:02:50.688 17:12:59 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:50.688 17:12:59 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS=
00:02:50.688 --rc lcov_branch_coverage=1
00:02:50.688 --rc lcov_function_coverage=1
00:02:50.688 --rc genhtml_branch_coverage=1
00:02:50.688 --rc genhtml_function_coverage=1
00:02:50.688 --rc genhtml_legend=1
00:02:50.688 --rc geninfo_all_blocks=1
00:02:50.688 '
00:02:50.688 17:12:59 -- spdk/autotest.sh@91 -- # LCOV_OPTS='
00:02:50.688 --rc lcov_branch_coverage=1
00:02:50.688 --rc lcov_function_coverage=1
00:02:50.688 --rc genhtml_branch_coverage=1
00:02:50.688 --rc genhtml_function_coverage=1
00:02:50.688 --rc genhtml_legend=1
00:02:50.688 --rc geninfo_all_blocks=1
00:02:50.688 '
00:02:50.688 17:12:59 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov
00:02:50.688 --rc lcov_branch_coverage=1
00:02:50.688 --rc lcov_function_coverage=1
00:02:50.688 --rc genhtml_branch_coverage=1
00:02:50.688 --rc genhtml_function_coverage=1
00:02:50.688 --rc genhtml_legend=1
00:02:50.688 --rc geninfo_all_blocks=1
00:02:50.688 --no-external'
00:02:50.688 17:12:59 -- spdk/autotest.sh@92 -- # LCOV='lcov
00:02:50.688 --rc lcov_branch_coverage=1
00:02:50.688 --rc lcov_function_coverage=1
00:02:50.688 --rc genhtml_branch_coverage=1
00:02:50.688 --rc genhtml_function_coverage=1
00:02:50.688 --rc genhtml_legend=1
00:02:50.688 --rc geninfo_all_blocks=1
00:02:50.688 --no-external'
00:02:50.688 17:12:59 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:50.688 lcov: LCOV version 1.15
00:02:50.688 17:12:59 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:52.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:02:52.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:02:52.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:02:52.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
[00:02:52.078-00:03:07.782: geninfo repeated the same two-line "no functions found" warning for every remaining .gcno stub under test/cpp_headers (bdev_zone through zipf) and under lib/ftl/upgrade (ftl_p2l_upgrade, ftl_chunk_upgrade, ftl_band_upgrade)]
00:03:20.014 17:13:27 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup
00:03:20.014 17:13:27 -- common/autotest_common.sh@712 -- # xtrace_disable
00:03:20.014 17:13:27 -- common/autotest_common.sh@10 -- # set +x
00:03:20.014 17:13:27 -- spdk/autotest.sh@102 -- # rm -f
00:03:20.014 17:13:27 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:22.562 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:03:22.562 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:03:22.562 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:03:22.562 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:03:22.562 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:03:22.562 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:03:22.562 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:03:22.562 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:03:22.562 0000:65:00.0 (144d a80a): Already using the nvme driver
00:03:22.823 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:03:22.823 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:03:22.823 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:03:22.823 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:03:22.823 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:03:22.823 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:03:22.823 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:03:22.823 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:03:23.084 17:13:31 -- spdk/autotest.sh@107 -- # get_zoned_devs
00:03:23.084 17:13:31 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:03:23.084 17:13:31 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:03:23.084 17:13:31 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:03:23.084 17:13:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:03:23.084 17:13:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:03:23.084 17:13:31 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:03:23.084 17:13:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:23.084 17:13:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:03:23.084 17:13:31 -- spdk/autotest.sh@109 -- # (( 0 > 0 ))
00:03:23.084 17:13:31 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1
00:03:23.084 17:13:31 -- spdk/autotest.sh@121 -- # grep -v p
00:03:23.084 17:13:31 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:03:23.084 17:13:31 -- spdk/autotest.sh@123 -- # [[ -z '' ]]
00:03:23.084 17:13:31 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1
00:03:23.084 17:13:31 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:03:23.084 17:13:31 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:23.084 No valid GPT data, bailing
00:03:23.345 17:13:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:23.345 17:13:31 -- scripts/common.sh@393 -- # pt=
00:03:23.345 17:13:31 -- scripts/common.sh@394 -- # return 1
00:03:23.345 17:13:31 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:23.345 1+0 records in
00:03:23.345 1+0 records out
00:03:23.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00206666 s, 507 MB/s
00:03:23.345 17:13:31 -- spdk/autotest.sh@129 -- # sync
00:03:23.345 17:13:31 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:23.345 17:13:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:23.345 17:13:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:31.491 17:13:39 -- spdk/autotest.sh@135 -- # uname -s
00:03:31.491 17:13:39 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']'
00:03:31.491 17:13:39 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:31.491 17:13:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:31.491 17:13:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:31.491 17:13:39 -- common/autotest_common.sh@10 -- # set +x
00:03:31.491 ************************************
00:03:31.491 START TEST setup.sh
00:03:31.491 ************************************
00:03:31.491 17:13:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:31.491 * Looking for test storage...
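The pre-cleanup pass logged above lists `/dev/nvme*n*`, drops partition nodes with `grep -v p`, probes the surviving namespace for a GPT label, and zeroes its first MiB when no label is found. A minimal sketch of that filter-and-wipe logic, pointed at sample device names and a scratch file rather than real block devices (the `whole_namespaces` helper is illustrative, not part of SPDK):

```shell
# Keep whole NVMe namespaces, dropping partition nodes such as nvme0n1p1;
# this mirrors the `ls /dev/nvme*n* | grep -v p` step in the log above.
whole_namespaces() { printf '%s\n' "$@" | grep -v p; }

whole_namespaces /dev/nvme0n1 /dev/nvme0n1p1 /dev/nvme1n1
# -> /dev/nvme0n1
# -> /dev/nvme1n1

# Zero the first 1 MiB, as autotest does for a disk with no valid GPT data,
# aimed at a scratch file instead of /dev/nvme0n1.
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=1 2>/dev/null
wc -c < "$scratch"   # 1048576 bytes, matching the "1+0 records" lines above
rm -f "$scratch"
```

On a real disk the wipe destroys any stale partition signature, which is exactly why the job gates it behind the `spdk-gpt.py`/`blkid` check.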
00:03:31.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:31.491 17:13:39 -- setup/test-setup.sh@10 -- # uname -s 00:03:31.491 17:13:39 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:31.491 17:13:39 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:31.491 17:13:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.491 17:13:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.491 17:13:39 -- common/autotest_common.sh@10 -- # set +x 00:03:31.491 ************************************ 00:03:31.491 START TEST acl 00:03:31.491 ************************************ 00:03:31.491 17:13:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:31.753 * Looking for test storage... 00:03:31.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:31.754 17:13:40 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:31.754 17:13:40 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:31.754 17:13:40 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:31.754 17:13:40 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:31.754 17:13:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:31.754 17:13:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:31.754 17:13:40 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:31.754 17:13:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:31.754 17:13:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:31.754 17:13:40 -- setup/acl.sh@12 -- # devs=() 00:03:31.754 17:13:40 -- setup/acl.sh@12 -- # declare -a devs 00:03:31.754 17:13:40 -- setup/acl.sh@13 -- # drivers=() 00:03:31.754 17:13:40 -- setup/acl.sh@13 -- # declare -A drivers 00:03:31.754 17:13:40 -- setup/acl.sh@51 -- # 
setup reset 00:03:31.754 17:13:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.754 17:13:40 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.963 17:13:44 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:35.963 17:13:44 -- setup/acl.sh@16 -- # local dev driver 00:03:35.963 17:13:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.963 17:13:44 -- setup/acl.sh@15 -- # setup output status 00:03:35.963 17:13:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.963 17:13:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:39.268 Hugepages 00:03:39.268 node hugesize free / total 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # continue 00:03:39.268 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # continue 00:03:39.268 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # continue 00:03:39.268 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.268 00:03:39.268 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # continue 00:03:39.268 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:39.268 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.268 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.268 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:39.268 17:13:47 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.268 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.268 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.268 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:39.268 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.268 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@21 
-- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:39.529 17:13:47 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:39.529 17:13:47 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.529 17:13:47 -- setup/acl.sh@20 -- # continue 00:03:39.529 17:13:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.529 17:13:47 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:39.529 17:13:47 -- setup/acl.sh@54 -- # run_test denied denied 00:03:39.529 17:13:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:39.529 17:13:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:39.529 17:13:47 -- common/autotest_common.sh@10 -- # set +x 00:03:39.529 ************************************ 00:03:39.529 START TEST denied 00:03:39.529 ************************************ 00:03:39.529 17:13:47 -- common/autotest_common.sh@1104 -- # denied 00:03:39.529 17:13:47 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:39.529 17:13:47 -- setup/acl.sh@38 -- # setup output config 00:03:39.529 17:13:47 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:39.529 17:13:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.529 17:13:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.740 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:43.740 17:13:52 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:43.740 17:13:52 -- setup/acl.sh@28 -- # local dev driver 00:03:43.740 17:13:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:43.740 17:13:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:43.740 17:13:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:43.740 17:13:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:43.740 17:13:52 -- 
setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:43.740 17:13:52 -- setup/acl.sh@41 -- # setup reset 00:03:43.740 17:13:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.740 17:13:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.031 00:03:49.031 real 0m9.449s 00:03:49.031 user 0m3.118s 00:03:49.031 sys 0m5.514s 00:03:49.031 17:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.031 17:13:57 -- common/autotest_common.sh@10 -- # set +x 00:03:49.031 ************************************ 00:03:49.031 END TEST denied 00:03:49.031 ************************************ 00:03:49.031 17:13:57 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:49.031 17:13:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:49.031 17:13:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:49.031 17:13:57 -- common/autotest_common.sh@10 -- # set +x 00:03:49.031 ************************************ 00:03:49.031 START TEST allowed 00:03:49.031 ************************************ 00:03:49.031 17:13:57 -- common/autotest_common.sh@1104 -- # allowed 00:03:49.031 17:13:57 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:49.031 17:13:57 -- setup/acl.sh@45 -- # setup output config 00:03:49.031 17:13:57 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:49.031 17:13:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.031 17:13:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.627 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:55.627 17:14:03 -- setup/acl.sh@47 -- # verify 00:03:55.627 17:14:03 -- setup/acl.sh@28 -- # local dev driver 00:03:55.627 17:14:03 -- setup/acl.sh@48 -- # setup reset 00:03:55.627 17:14:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.627 17:14:03 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.844 
00:03:59.844 real 0m10.191s 00:03:59.844 user 0m3.037s 00:03:59.844 sys 0m5.495s 00:03:59.844 17:14:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.844 17:14:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.844 ************************************ 00:03:59.844 END TEST allowed 00:03:59.844 ************************************ 00:03:59.844 00:03:59.844 real 0m27.779s 00:03:59.844 user 0m9.138s 00:03:59.844 sys 0m16.399s 00:03:59.844 17:14:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.844 17:14:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.844 ************************************ 00:03:59.844 END TEST acl 00:03:59.844 ************************************ 00:03:59.844 17:14:07 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.844 17:14:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.844 17:14:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.844 17:14:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.844 ************************************ 00:03:59.844 START TEST hugepages 00:03:59.844 ************************************ 00:03:59.844 17:14:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.844 * Looking for test storage... 
00:03:59.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:59.844 17:14:07 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:59.844 17:14:07 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:59.845 17:14:07 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:59.845 17:14:07 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:59.845 17:14:07 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:59.845 17:14:07 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:59.845 17:14:07 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:59.845 17:14:07 -- setup/common.sh@18 -- # local node= 00:03:59.845 17:14:07 -- setup/common.sh@19 -- # local var val 00:03:59.845 17:14:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.845 17:14:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.845 17:14:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.845 17:14:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.845 17:14:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.845 17:14:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 104804776 kB' 'MemAvailable: 108291568 kB' 'Buffers: 4136 kB' 'Cached: 12165752 kB' 'SwapCached: 0 kB' 'Active: 8959376 kB' 'Inactive: 3765476 kB' 'Active(anon): 8564168 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558324 kB' 'Mapped: 182892 kB' 'Shmem: 8009204 kB' 'KReclaimable: 300732 kB' 'Slab: 1574712 kB' 'SReclaimable: 300732 kB' 'SUnreclaim: 1273980 kB' 'KernelStack: 27040 kB' 
'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69453824 kB' 'Committed_AS: 9873000 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235456 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 
17:14:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.845 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.845 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 
00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': 
' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 
-- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # continue 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.846 17:14:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.846 17:14:07 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.846 17:14:07 -- setup/common.sh@33 -- # echo 2048 00:03:59.846 17:14:07 -- setup/common.sh@33 -- # return 0 00:03:59.846 17:14:07 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:59.846 17:14:07 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:59.846 17:14:07 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:59.846 17:14:07 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:59.846 17:14:07 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:59.846 17:14:07 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:59.846 17:14:07 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:59.846 17:14:07 -- setup/hugepages.sh@207 -- # get_nodes 00:03:59.846 17:14:07 -- setup/hugepages.sh@27 
-- # local node 00:03:59.846 17:14:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.846 17:14:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:59.846 17:14:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.846 17:14:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.846 17:14:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.846 17:14:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.846 17:14:07 -- setup/hugepages.sh@208 -- # clear_hp 00:03:59.846 17:14:07 -- setup/hugepages.sh@37 -- # local node hp 00:03:59.846 17:14:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.846 17:14:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.846 17:14:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.846 17:14:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.846 17:14:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.846 17:14:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.846 17:14:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.846 17:14:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.846 17:14:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.846 17:14:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.846 17:14:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.846 17:14:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.846 17:14:07 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:59.846 17:14:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.846 17:14:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.846 17:14:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.846 
************************************ 00:03:59.846 START TEST default_setup 00:03:59.846 ************************************ 00:03:59.846 17:14:07 -- common/autotest_common.sh@1104 -- # default_setup 00:03:59.846 17:14:07 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:59.846 17:14:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.846 17:14:07 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.846 17:14:07 -- setup/hugepages.sh@51 -- # shift 00:03:59.846 17:14:07 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.846 17:14:07 -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.846 17:14:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.846 17:14:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.846 17:14:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.846 17:14:07 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.846 17:14:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.846 17:14:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.846 17:14:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.846 17:14:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.846 17:14:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.846 17:14:07 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.846 17:14:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.846 17:14:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.847 17:14:07 -- setup/hugepages.sh@73 -- # return 0 00:03:59.847 17:14:07 -- setup/hugepages.sh@137 -- # setup output 00:03:59.847 17:14:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.847 17:14:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.148 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:03.148 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:03.148 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 
00:04:03.148 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:03.148 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:03.148 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:03.148 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:03.148 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:03.148 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:03.148 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:03.408 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:03.408 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:03.408 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:03.408 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:03.408 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:03.408 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:03.408 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:03.679 17:14:12 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:03.679 17:14:12 -- setup/hugepages.sh@89 -- # local node 00:04:03.679 17:14:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.679 17:14:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.679 17:14:12 -- setup/hugepages.sh@92 -- # local surp 00:04:03.679 17:14:12 -- setup/hugepages.sh@93 -- # local resv 00:04:03.679 17:14:12 -- setup/hugepages.sh@94 -- # local anon 00:04:03.679 17:14:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.679 17:14:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.679 17:14:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.679 17:14:12 -- setup/common.sh@18 -- # local node= 00:04:03.679 17:14:12 -- setup/common.sh@19 -- # local var val 00:04:03.679 17:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.679 17:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.679 17:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.679 17:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.679 17:14:12 -- setup/common.sh@28 -- # 
mapfile -t mem
00:04:03.679 17:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.679 17:14:12 -- setup/common.sh@31 -- # IFS=': '
00:04:03.679 17:14:12 -- setup/common.sh@31 -- # read -r var val _
00:04:03.679 17:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 106984160 kB' 'MemAvailable: 110470936 kB' 'Buffers: 4136 kB' 'Cached: 12165876 kB' 'SwapCached: 0 kB' 'Active: 8974712 kB' 'Inactive: 3765476 kB' 'Active(anon): 8579504 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573080 kB' 'Mapped: 183060 kB' 'Shmem: 8009328 kB' 'KReclaimable: 300700 kB' 'Slab: 1573256 kB' 'SReclaimable: 300700 kB' 'SUnreclaim: 1272556 kB' 'KernelStack: 27360 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9889044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235664 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB'
00:04:03.680 17:14:12 -- setup/common.sh@33 -- # echo 0
00:04:03.680 17:14:12 -- setup/common.sh@33 -- # return 0
00:04:03.680 17:14:12 -- setup/hugepages.sh@97 -- # anon=0
00:04:03.680 17:14:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.680 17:14:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.680 17:14:12 -- setup/common.sh@18 -- # local node=
00:04:03.680 17:14:12 -- setup/common.sh@19 -- # local var val
00:04:03.680 17:14:12 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.680 17:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.680 17:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.680 17:14:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.680 17:14:12 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.680 17:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.680 17:14:12 -- setup/common.sh@31 -- # IFS=': '
00:04:03.680 17:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 106984504 kB' 'MemAvailable: 110471280 kB' 'Buffers: 4136 kB' 'Cached: 12165880 kB' 'SwapCached: 0 kB' 'Active:
8975392 kB' 'Inactive: 3765476 kB' 'Active(anon): 8580184 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573696 kB' 'Mapped: 183024 kB' 'Shmem: 8009332 kB' 'KReclaimable: 300700 kB' 'Slab: 1572844 kB' 'SReclaimable: 300700 kB' 'SUnreclaim: 1272144 kB' 'KernelStack: 27280 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9906200 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235680 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB'
00:04:03.680 17:14:12 -- setup/common.sh@31 -- # read -r var val _
00:04:03.682 17:14:12 -- setup/common.sh@33 -- # echo 0
00:04:03.682 17:14:12 -- setup/common.sh@33 -- # return 0
00:04:03.682 17:14:12 -- setup/hugepages.sh@99 -- # surp=0
00:04:03.682 17:14:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.682 17:14:12 -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:04:03.682 17:14:12 -- setup/common.sh@18 -- # local node= 00:04:03.682 17:14:12 -- setup/common.sh@19 -- # local var val 00:04:03.682 17:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.682 17:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.682 17:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.682 17:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.682 17:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.682 17:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.682 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.682 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.682 17:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 106985752 kB' 'MemAvailable: 110472528 kB' 'Buffers: 4136 kB' 'Cached: 12165892 kB' 'SwapCached: 0 kB' 'Active: 8974156 kB' 'Inactive: 3765476 kB' 'Active(anon): 8578948 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572896 kB' 'Mapped: 182936 kB' 'Shmem: 8009344 kB' 'KReclaimable: 300700 kB' 'Slab: 1572876 kB' 'SReclaimable: 300700 kB' 'SUnreclaim: 1272176 kB' 'KernelStack: 27200 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9888948 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235600 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 
'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:03.682 17:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.682 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.682 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.682 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.948 17:14:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.948 17:14:12 -- setup/common.sh@33 -- # echo 0 00:04:03.948 17:14:12 -- setup/common.sh@33 -- # return 0 00:04:03.948 17:14:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.948 17:14:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.948 nr_hugepages=1024 00:04:03.948 17:14:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.948 resv_hugepages=0 00:04:03.948 17:14:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.948 surplus_hugepages=0 00:04:03.948 17:14:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.948 anon_hugepages=0 00:04:03.948 17:14:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.948 17:14:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.948 17:14:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.948 17:14:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.948 17:14:12 -- setup/common.sh@18 -- # local node= 00:04:03.948 17:14:12 -- setup/common.sh@19 -- # local var val 00:04:03.948 17:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.948 17:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.948 17:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.948 17:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.948 17:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.948 17:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.949 17:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 106984716 kB' 'MemAvailable: 110471492 kB' 'Buffers: 4136 kB' 'Cached: 12165912 kB' 'SwapCached: 0 kB' 'Active: 8974336 kB' 'Inactive: 3765476 kB' 'Active(anon): 8579128
kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573096 kB' 'Mapped: 182936 kB' 'Shmem: 8009364 kB' 'KReclaimable: 300700 kB' 'Slab: 1572876 kB' 'SReclaimable: 300700 kB' 'SUnreclaim: 1272176 kB' 'KernelStack: 27264 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9888968 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235616 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:03.949 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.949 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.949 17:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.949 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.949 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.949 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.950 17:14:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.950 17:14:12 -- setup/common.sh@33 -- # echo 1024 00:04:03.950 17:14:12 -- setup/common.sh@33 -- # return 0 00:04:03.950 17:14:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.950 17:14:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.950 17:14:12 -- setup/hugepages.sh@27 -- # local node 00:04:03.950 17:14:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.950 17:14:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.950 17:14:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.950 17:14:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:03.950 17:14:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.950 17:14:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.950 17:14:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.950 17:14:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.950 17:14:12 -- setup/hugepages.sh@117 -- # get_meminfo
HugePages_Surp 0 00:04:03.950 17:14:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.950 17:14:12 -- setup/common.sh@18 -- # local node=0 00:04:03.950 17:14:12 -- setup/common.sh@19 -- # local var val 00:04:03.950 17:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.950 17:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.950 17:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.950 17:14:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.950 17:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.950 17:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.950 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.950 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.950 17:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 51753908 kB' 'MemUsed: 13899056 kB' 'SwapCached: 0 kB' 'Active: 6189912 kB' 'Inactive: 3524884 kB' 'Active(anon): 6048360 kB' 'Inactive(anon): 0 kB' 'Active(file): 141552 kB' 'Inactive(file): 3524884 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9435596 kB' 'Mapped: 95940 kB' 'AnonPages: 282380 kB' 'Shmem: 5769160 kB' 'KernelStack: 14104 kB' 'PageTables: 4852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131548 kB' 'Slab: 832404 kB' 'SReclaimable: 131548 kB' 'SUnreclaim: 700856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.950 17:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.950 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.950 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.950 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.950 17:14:12 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.950 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.950 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.950 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- 
setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # continue 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.951 17:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.951 17:14:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.951 17:14:12 -- 
setup/common.sh@33 -- # echo 0 00:04:03.951 17:14:12 -- setup/common.sh@33 -- # return 0 00:04:03.951 17:14:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.951 17:14:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.951 17:14:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.951 17:14:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.951 17:14:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.951 node0=1024 expecting 1024 00:04:03.951 17:14:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.951 00:04:03.951 real 0m4.360s 00:04:03.951 user 0m1.697s 00:04:03.951 sys 0m2.695s 00:04:03.951 17:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.951 17:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:03.951 ************************************ 00:04:03.951 END TEST default_setup 00:04:03.951 ************************************ 00:04:03.951 17:14:12 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:03.951 17:14:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.951 17:14:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.951 17:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:03.951 ************************************ 00:04:03.951 START TEST per_node_1G_alloc 00:04:03.951 ************************************ 00:04:03.951 17:14:12 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:03.951 17:14:12 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:03.951 17:14:12 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:03.951 17:14:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:03.951 17:14:12 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:03.951 17:14:12 -- setup/hugepages.sh@51 -- # shift 00:04:03.951 17:14:12 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:03.951 17:14:12 -- setup/hugepages.sh@52 -- # 
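The scan above (setup/common.sh's get_meminfo) walks every meminfo field until it hits the requested key, stripping the "Node 0 " prefix that per-node files carry and splitting each line on ": ". A minimal sketch of that parsing technique, with the file path made a parameter (a hypothetical change from the real helper, so the sketch is testable without a NUMA host):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing seen in the log above. Assumption:
# per-node meminfo files prefix each line with "Node <n> ", as
# /sys/devices/system/node/node0/meminfo does.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 mem_f=$2
    local -a mem
    mapfile -t mem < "$mem_f"
    # Strip the "Node <n> " prefix, mirroring mem=("${mem[@]#Node +([0-9]) }")
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        # Split "Field: value kB" on ": " into key and number,
        # mirroring IFS=': ' read -r var val _
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    echo 0
}

# Example against a synthetic node0-style snippet:
tmp=$(mktemp)
printf '%s\n' 'Node 0 MemTotal: 65652964 kB' \
              'Node 0 HugePages_Total: 1024' \
              'Node 0 HugePages_Surp: 0' > "$tmp"
get_meminfo_sketch HugePages_Surp "$tmp"   # prints: 0
rm -f "$tmp"
```

This is why the log shows one `continue` record per field: the loop compares every key against the quoted pattern until `HugePages_Surp` matches, then echoes the value and returns.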
local node_ids 00:04:03.951 17:14:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.951 17:14:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:03.951 17:14:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:03.951 17:14:12 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:03.951 17:14:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.952 17:14:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:03.952 17:14:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.952 17:14:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.952 17:14:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.952 17:14:12 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:03.952 17:14:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.952 17:14:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:03.952 17:14:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.952 17:14:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:03.952 17:14:12 -- setup/hugepages.sh@73 -- # return 0 00:04:03.952 17:14:12 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:03.952 17:14:12 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:03.952 17:14:12 -- setup/hugepages.sh@146 -- # setup output 00:04:03.952 17:14:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.952 17:14:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:08.163 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:80:01.0 (8086 0b00): 
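The get_test_nr_hugepages steps above turn the requested size (1048576 kB, i.e. 1 GiB per node) into a page count and fan it out across the user-supplied nodes, ending in NRHUGE=512 and HUGENODE=0,1. A sketch of that arithmetic; default_hugepages=2048 kB is an assumption taken from the "Hugepagesize: 2048 kB" field in the meminfo dumps elsewhere in this log:

```shell
#!/usr/bin/env bash
# Sketch of the per-node request math behind NRHUGE=512 HUGENODE=0,1:
# 1048576 kB requested per node / 2048 kB per hugepage = 512 pages
# per node, assigned to each node in user_nodes.
size=1048576                 # kB, from get_test_nr_hugepages 1048576 0 1
default_hugepages=2048       # kB (assumed; matches Hugepagesize: 2048 kB)
user_nodes=(0 1)

nr_hugepages=$(( size / default_hugepages ))   # 512

declare -A nodes_test
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages            # nodes_test[_no_nodes]=512
done

echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${user_nodes[*]}")"
# prints: NRHUGE=512 HUGENODE=0,1
```

With 512 pages on each of two nodes, the later verification step expects 1024 total, matching the "node0=1024 expecting 1024" check in the default_setup test above.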
Already using the vfio-pci driver 00:04:08.163 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:08.163 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.163 17:14:16 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:08.163 17:14:16 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:08.163 17:14:16 -- setup/hugepages.sh@89 -- # local node 00:04:08.163 17:14:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.163 17:14:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.163 17:14:16 -- setup/hugepages.sh@92 -- # local surp 00:04:08.163 17:14:16 -- setup/hugepages.sh@93 -- # local resv 00:04:08.163 17:14:16 -- setup/hugepages.sh@94 -- # local anon 00:04:08.163 17:14:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.163 17:14:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.163 17:14:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.163 17:14:16 -- setup/common.sh@18 -- # local node= 00:04:08.163 17:14:16 -- setup/common.sh@19 -- # local var val 00:04:08.163 17:14:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.163 17:14:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.163 17:14:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.163 17:14:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.163 17:14:16 -- 
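The hugepages.sh@96 check above decides whether anonymous transparent hugepages can contribute to the count: it reads the transparent_hugepage policy string ("always [madvise] never" in this run, with the active policy bracketed) and only fetches AnonHugePages when the selection is not [never]. A sketch of that guard; the fallback string is an assumption for hosts without the sysfs file:

```shell
#!/usr/bin/env bash
# Sketch of the THP guard at setup/hugepages.sh@96: the kernel reports
# the active policy in brackets, e.g. "always [madvise] never"; anon
# hugepages only matter when the selected policy is not [never].
thp_file=/sys/kernel/mm/transparent_hugepage/enabled
thp=$(cat "$thp_file" 2>/dev/null || echo "always [madvise] never")

if [[ $thp != *"[never]"* ]]; then
    anon_relevant=1    # THP possible: count AnonHugePages toward the total
else
    anon_relevant=0    # THP disabled globally: anon contribution is 0
fi
echo "policy='$thp' anon_relevant=$anon_relevant"
```

In this log the policy is "always [madvise] never", so the branch is taken and get_meminfo AnonHugePages runs, producing the AnonHugePages scan that follows.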
setup/common.sh@28 -- # mapfile -t mem 00:04:08.163 17:14:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107006920 kB' 'MemAvailable: 110493692 kB' 'Buffers: 4136 kB' 'Cached: 12166040 kB' 'SwapCached: 0 kB' 'Active: 8973680 kB' 'Inactive: 3765476 kB' 'Active(anon): 8578472 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572284 kB' 'Mapped: 181924 kB' 'Shmem: 8009492 kB' 'KReclaimable: 300692 kB' 'Slab: 1572892 kB' 'SReclaimable: 300692 kB' 'SUnreclaim: 1272200 kB' 'KernelStack: 27056 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9877520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235616 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- 
setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 
-- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.163 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 
-- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.164 17:14:16 -- setup/common.sh@33 -- # echo 0 00:04:08.164 17:14:16 -- setup/common.sh@33 -- # return 0 00:04:08.164 17:14:16 -- setup/hugepages.sh@97 -- # anon=0 00:04:08.164 17:14:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.164 17:14:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.164 17:14:16 -- setup/common.sh@18 -- # local node= 00:04:08.164 17:14:16 -- setup/common.sh@19 -- # local var val 00:04:08.164 17:14:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.164 17:14:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.164 17:14:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.164 17:14:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.164 17:14:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.164 17:14:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107005912 kB' 
'MemAvailable: 110492684 kB' 'Buffers: 4136 kB' 'Cached: 12166044 kB' 'SwapCached: 0 kB' 'Active: 8973456 kB' 'Inactive: 3765476 kB' 'Active(anon): 8578248 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571936 kB' 'Mapped: 181856 kB' 'Shmem: 8009496 kB' 'KReclaimable: 300692 kB' 'Slab: 1572876 kB' 'SReclaimable: 300692 kB' 'SUnreclaim: 1272184 kB' 'KernelStack: 27040 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9877732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235520 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': 
' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.164 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 17:14:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.164 17:14:16 -- setup/common.sh@32 -- # continue [... identical @31 read / @32 compare-and-continue trace repeats for each remaining /proc/meminfo key, Inactive(file) through HugePages_Rsvd; no match ...] 00:04:08.165 17:14:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.165 17:14:16 -- setup/common.sh@33 -- # echo 0 00:04:08.165 17:14:16 -- setup/common.sh@33 -- # return 0 00:04:08.165 17:14:16 -- setup/hugepages.sh@99 -- # surp=0 00:04:08.165 17:14:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.165
17:14:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.165 17:14:16 -- setup/common.sh@18 -- # local node= 00:04:08.165 17:14:16 -- setup/common.sh@19 -- # local var val 00:04:08.165 17:14:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.165 17:14:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.165 17:14:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.165 17:14:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.165 17:14:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.165 17:14:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.165 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 17:14:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107006136 kB' 'MemAvailable: 110492908 kB' 'Buffers: 4136 kB' 'Cached: 12166052 kB' 'SwapCached: 0 kB' 'Active: 8974132 kB' 'Inactive: 3765476 kB' 'Active(anon): 8578924 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572804 kB' 'Mapped: 182412 kB' 'Shmem: 8009504 kB' 'KReclaimable: 300692 kB' 'Slab: 1572892 kB' 'SReclaimable: 300692 kB' 'SUnreclaim: 1272200 kB' 'KernelStack: 27008 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9879500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235488 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:08.166 [... identical @31 read / @32 compare-and-continue trace repeats for each /proc/meminfo key, MemTotal through HugePages_Free, against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; no match ...] 00:04:08.167 17:14:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.167 17:14:16 -- setup/common.sh@33 -- # echo 0 00:04:08.167 17:14:16 -- setup/common.sh@33 -- # return 0 00:04:08.167 17:14:16 -- setup/hugepages.sh@100 -- # resv=0 00:04:08.167 17:14:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.167 nr_hugepages=1024 00:04:08.167 17:14:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.167 resv_hugepages=0 00:04:08.167 17:14:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.167 surplus_hugepages=0 00:04:08.167 17:14:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.167 anon_hugepages=0 00:04:08.167 17:14:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.167 17:14:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.167 17:14:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.167 17:14:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.167 17:14:16 -- setup/common.sh@18 -- # local node= 00:04:08.167 17:14:16 -- setup/common.sh@19 -- # local var val 00:04:08.167 17:14:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.167 17:14:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.167 17:14:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.167 17:14:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.167 17:14:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.167 17:14:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.167 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.167 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.167 17:14:16 -- setup/common.sh@16 -- # printf '%s\n'
'MemTotal: 126324744 kB' 'MemFree: 106999436 kB' 'MemAvailable: 110486208 kB' 'Buffers: 4136 kB' 'Cached: 12166068 kB' 'SwapCached: 0 kB' 'Active: 8978900 kB' 'Inactive: 3765476 kB' 'Active(anon): 8583692 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577580 kB' 'Mapped: 182696 kB' 'Shmem: 8009520 kB' 'KReclaimable: 300692 kB' 'Slab: 1572892 kB' 'SReclaimable: 300692 kB' 'SUnreclaim: 1272200 kB' 'KernelStack: 27040 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9883880 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:08.167 [... identical @31 read / @32 compare-and-continue trace repeats for each /proc/meminfo key against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; no match ...] 00:04:08.168 17:14:16 --
setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 
-- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.168 17:14:16 -- setup/common.sh@33 -- # echo 1024 00:04:08.168 17:14:16 -- setup/common.sh@33 -- # return 0 00:04:08.168 17:14:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.168 17:14:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.168 17:14:16 -- setup/hugepages.sh@27 -- # local node 00:04:08.168 17:14:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.168 17:14:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.168 17:14:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.168 17:14:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.168 17:14:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.168 17:14:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.168 17:14:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.168 17:14:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:04:08.168 17:14:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.168 17:14:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.168 17:14:16 -- setup/common.sh@18 -- # local node=0 00:04:08.168 17:14:16 -- setup/common.sh@19 -- # local var val 00:04:08.168 17:14:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.168 17:14:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.168 17:14:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.168 17:14:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.168 17:14:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.168 17:14:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 52815500 kB' 'MemUsed: 12837464 kB' 'SwapCached: 0 kB' 'Active: 6189168 kB' 'Inactive: 3524884 kB' 'Active(anon): 6047616 kB' 'Inactive(anon): 0 kB' 'Active(file): 141552 kB' 'Inactive(file): 3524884 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9435664 kB' 'Mapped: 95284 kB' 'AnonPages: 281532 kB' 'Shmem: 5769228 kB' 'KernelStack: 13880 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131540 kB' 'Slab: 832580 kB' 'SReclaimable: 131540 kB' 'SUnreclaim: 701040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.168 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.168 17:14:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@33 -- # echo 0 00:04:08.169 17:14:16 -- setup/common.sh@33 -- # return 0 00:04:08.169 17:14:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.169 17:14:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.169 17:14:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.169 17:14:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:08.169 17:14:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.169 17:14:16 -- setup/common.sh@18 -- # local node=1 00:04:08.169 17:14:16 -- setup/common.sh@19 -- # local var val 00:04:08.169 17:14:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.169 17:14:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.169 17:14:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:08.169 17:14:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:08.169 17:14:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.169 17:14:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671780 kB' 'MemFree: 54184724 kB' 'MemUsed: 6487056 kB' 'SwapCached: 0 kB' 'Active: 2783788 kB' 'Inactive: 240592 kB' 'Active(anon): 2530132 kB' 'Inactive(anon): 0 kB' 'Active(file): 253656 kB' 'Inactive(file): 240592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2734568 kB' 'Mapped: 86624 kB' 'AnonPages: 290016 kB' 'Shmem: 2240320 kB' 'KernelStack: 13144 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169152 kB' 'Slab: 740312 kB' 'SReclaimable: 169152 kB' 'SUnreclaim: 571160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.169 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.169 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 
17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 
-- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.170 17:14:16 -- setup/common.sh@32 -- # continue 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.170 17:14:16 -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue
00:04:08.170 17:14:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.170 17:14:16 -- setup/common.sh@33 -- # echo 0
00:04:08.170 17:14:16 -- setup/common.sh@33 -- # return 0
00:04:08.170 17:14:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.170 17:14:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.170 17:14:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.170 17:14:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.170 17:14:16 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:08.170 node0=512 expecting 512
00:04:08.170 17:14:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.170 17:14:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.170 17:14:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.170 17:14:16 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:08.170 node1=512 expecting 512
00:04:08.170 17:14:16 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:08.170 real 0m4.154s
00:04:08.170 user 0m1.667s
00:04:08.170 sys 0m2.556s
00:04:08.170 17:14:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:08.170 17:14:16 -- common/autotest_common.sh@10 -- # set +x
00:04:08.170 ************************************
00:04:08.170 END TEST per_node_1G_alloc
00:04:08.170 ************************************
00:04:08.170 17:14:16 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:08.170 17:14:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:08.170 17:14:16 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:08.170 17:14:16 -- common/autotest_common.sh@10 -- # set +x
00:04:08.170 ************************************
00:04:08.170 START TEST even_2G_alloc
00:04:08.170 ************************************
00:04:08.170 17:14:16 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:08.170 17:14:16 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:08.170 17:14:16 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:08.170 17:14:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:08.170 17:14:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:08.170 17:14:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:08.170 17:14:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:08.170 17:14:16 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:08.170 17:14:16 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:08.170 17:14:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:08.171 17:14:16 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:08.171 17:14:16 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:08.171 17:14:16 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:08.171 17:14:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:08.171 17:14:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:08.171 17:14:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:08.171 17:14:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:08.171 17:14:16 -- setup/hugepages.sh@83 -- # : 512
00:04:08.171 17:14:16 -- setup/hugepages.sh@84 -- # : 1
00:04:08.171 17:14:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:08.171 17:14:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:08.171 17:14:16 -- setup/hugepages.sh@83 -- # : 0
00:04:08.171 17:14:16 -- setup/hugepages.sh@84 -- # : 0
00:04:08.171 17:14:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:08.171 17:14:16 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:08.171 17:14:16 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:08.171 17:14:16 -- setup/hugepages.sh@153 -- # setup output
00:04:08.171 17:14:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.171 17:14:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:12.445 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:12.445 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:12.445 17:14:20 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:12.445 17:14:20 -- setup/hugepages.sh@89 -- # local node
00:04:12.445 17:14:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.445 17:14:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.445 17:14:20 -- setup/hugepages.sh@92 -- # local surp
00:04:12.445 17:14:20 -- setup/hugepages.sh@93 -- # local resv
00:04:12.445 17:14:20 -- setup/hugepages.sh@94 -- # local anon
00:04:12.445 17:14:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:12.445 17:14:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.445 17:14:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.445 17:14:20 -- setup/common.sh@18 -- # local node=
00:04:12.445 17:14:20 -- setup/common.sh@19 -- # local var val
00:04:12.445 17:14:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.445 17:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.445 17:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.445 17:14:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.445 17:14:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.445 17:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.445 17:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107042140 kB' 'MemAvailable: 110528912 kB' 'Buffers: 4136 kB' 'Cached: 12166188 kB' 'SwapCached: 0 kB' 'Active: 8977400 kB' 'Inactive: 3765476 kB' 'Active(anon): 8582192 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576272 kB' 'Mapped: 181996 kB' 'Shmem: 8009640 kB' 'KReclaimable: 300692 kB' 'Slab: 1572648 kB' 'SReclaimable: 300692 kB' 'SUnreclaim: 1271956 kB' 'KernelStack: 27056 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9881796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235392 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB'
00:04:12.445 17:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -- # continue
... (same no-match/continue cycle repeats for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS) ...
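In the even_2G_alloc setup above, get_test_nr_hugepages_per_node walks `_no_nodes=2` down to zero, assigning each node an equal share of `_nr_hugepages=1024` (hence `nodes_test[_no_nodes - 1]=512` twice). A minimal sketch of that split under our own function and variable names, not SPDK's actual helper:

```shell
# Sketch of the even per-node split traced in setup/hugepages.sh@81-84:
# walk nodes from the last index down, giving each an equal share.
split_hugepages_evenly() {
    local nr_hugepages=$1 no_nodes=$2
    local -a nodes_test
    local share=$((nr_hugepages / no_nodes))
    while ((no_nodes > 0)); do
        nodes_test[no_nodes - 1]=$share
        no_nodes=$((no_nodes - 1))
    done
    echo "${nodes_test[@]}"
}
```

With 1024 pages over 2 nodes this yields 512 per node, matching the later `node0=512 expecting 512` / `node1=512 expecting 512` verification.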
00:04:12.446 17:14:20 -- setup/common.sh@31 -- # read -r var val _
00:04:12.446 17:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -- # continue
00:04:12.446 17:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -- # continue
00:04:12.446 17:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -- # continue
00:04:12.446 17:14:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -- # continue
00:04:12.446 17:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -- # continue
00:04:12.446 17:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.446 17:14:20 -- setup/common.sh@33 -- # echo 0
00:04:12.446 17:14:20 -- setup/common.sh@33 -- # return 0
00:04:12.446 17:14:20 -- setup/hugepages.sh@97 -- # anon=0
00:04:12.446 17:14:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:12.446 17:14:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.446 17:14:20 -- setup/common.sh@18 -- # local node=
00:04:12.446 17:14:20 -- setup/common.sh@19 -- # local var val
00:04:12.446 17:14:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.446 17:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.446 17:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.446 17:14:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.446 17:14:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.446 17:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.447 17:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107040832 kB' 'MemAvailable: 110527592 kB' 'Buffers: 4136 kB' 'Cached: 12166192 kB' 'SwapCached: 0 kB' 'Active: 8976964 kB' 'Inactive: 3765476 kB' 'Active(anon): 8581756 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575796 kB' 'Mapped: 181920 kB' 'Shmem: 8009644 kB' 'KReclaimable: 300668 kB' 'Slab: 1572500 kB' 'SReclaimable: 300668 kB' 'SUnreclaim: 1271832 kB' 'KernelStack: 27168 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9881808 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235520 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB'
00:04:12.447 17:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue
... (same no-match/continue cycle repeats for MemFree through HugePages_Free) ...
00:04:12.448 17:14:20 -- setup/common.sh@31 --
# read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 17:14:20 -- setup/common.sh@33 -- # echo 0 00:04:12.448 17:14:20 -- setup/common.sh@33 -- # return 0 00:04:12.448 17:14:20 -- setup/hugepages.sh@99 -- # surp=0 00:04:12.448 17:14:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.448 17:14:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.448 17:14:20 -- setup/common.sh@18 -- # local node= 00:04:12.448 17:14:20 -- setup/common.sh@19 -- # local var val 00:04:12.448 17:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.448 17:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.448 17:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.448 17:14:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.448 17:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.448 17:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107039100 kB' 'MemAvailable: 110525860 kB' 'Buffers: 4136 kB' 'Cached: 12166192 kB' 'SwapCached: 0 kB' 'Active: 8977036 kB' 'Inactive: 3765476 kB' 'Active(anon): 8581828 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575780 kB' 'Mapped: 181920 kB' 'Shmem: 8009644 kB' 'KReclaimable: 
300668 kB' 'Slab: 1572488 kB' 'SReclaimable: 300668 kB' 'SUnreclaim: 1271820 kB' 'KernelStack: 27120 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9883468 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235552 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 
-- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # 
continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.448 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 
17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 17:14:20 -- setup/common.sh@33 -- # echo 0 00:04:12.449 17:14:20 -- setup/common.sh@33 -- # return 0 00:04:12.449 17:14:20 -- setup/hugepages.sh@100 -- # resv=0 00:04:12.449 17:14:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:12.449 nr_hugepages=1024 00:04:12.449 17:14:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.449 resv_hugepages=0 00:04:12.449 17:14:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.449 surplus_hugepages=0 00:04:12.449 17:14:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.449 anon_hugepages=0 00:04:12.449 17:14:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.449 17:14:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:12.449 17:14:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.449 17:14:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.449 17:14:20 -- setup/common.sh@18 -- # local node= 00:04:12.449 17:14:20 -- setup/common.sh@19 -- # 
local var val 00:04:12.449 17:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.449 17:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.449 17:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.449 17:14:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.449 17:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.449 17:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107040424 kB' 'MemAvailable: 110527184 kB' 'Buffers: 4136 kB' 'Cached: 12166216 kB' 'SwapCached: 0 kB' 'Active: 8976824 kB' 'Inactive: 3765476 kB' 'Active(anon): 8581616 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575420 kB' 'Mapped: 181920 kB' 'Shmem: 8009668 kB' 'KReclaimable: 300668 kB' 'Slab: 1572436 kB' 'SReclaimable: 300668 kB' 'SUnreclaim: 1271768 kB' 'KernelStack: 27168 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9883112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235568 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 17:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- 
setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # 
continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 
00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 
00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.451 17:14:20 -- setup/common.sh@33 -- # echo 1024 00:04:12.451 17:14:20 -- setup/common.sh@33 -- # return 0 00:04:12.451 17:14:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.451 17:14:20 -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.451 17:14:20 -- setup/hugepages.sh@27 -- # local node 00:04:12.451 
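The long run of `[[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] … continue` pairs above is `setup/common.sh`'s `get_meminfo` scanning a meminfo-style file with `IFS=': ' read -r var val _`, skipping every key until the requested one matches, then echoing its value (here `HugePages_Total` → `1024`). A minimal standalone sketch of that pattern (simplified reconstruction, not the actual SPDK helper — it reads stdin instead of a `mem_f` file):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop seen in the trace: scan "Key: value" lines,
# print the value for one key. Each non-matching key produces the
# [[ ... ]] / continue pair that dominates the xtrace output above.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Demo with a fabricated two-line sample instead of /proc/meminfo
printf '%s\n' 'MemTotal: 65652964 kB' 'HugePages_Total: 1024' |
    get_meminfo HugePages_Total   # prints 1024
```

The caller then feeds the echoed value into checks like `(( 1024 == nr_hugepages + surp + resv ))`, which is the line that closes the parse above.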
17:14:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.451 17:14:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.451 17:14:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.451 17:14:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.451 17:14:20 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:12.451 17:14:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.451 17:14:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.451 17:14:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.451 17:14:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.451 17:14:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.451 17:14:20 -- setup/common.sh@18 -- # local node=0 00:04:12.451 17:14:20 -- setup/common.sh@19 -- # local var val 00:04:12.451 17:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.451 17:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.451 17:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.451 17:14:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.451 17:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.451 17:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 52864076 kB' 'MemUsed: 12788888 kB' 'SwapCached: 0 kB' 'Active: 6191768 kB' 'Inactive: 3524884 kB' 'Active(anon): 6050216 kB' 'Inactive(anon): 0 kB' 'Active(file): 141552 kB' 'Inactive(file): 3524884 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9435780 kB' 'Mapped: 95296 kB' 'AnonPages: 284204 kB' 'Shmem: 5769344 kB' 
'KernelStack: 13896 kB' 'PageTables: 4744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131516 kB' 'Slab: 832520 kB' 'SReclaimable: 131516 kB' 'SUnreclaim: 701004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@33 -- # echo 0 00:04:12.452 17:14:20 -- setup/common.sh@33 -- # return 0 00:04:12.452 17:14:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.452 17:14:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.452 17:14:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.452 17:14:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:12.452 17:14:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.452 17:14:20 -- setup/common.sh@18 -- # local node=1 00:04:12.452 17:14:20 -- setup/common.sh@19 -- # local var val 00:04:12.452 17:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.452 17:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.452 17:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:12.452 17:14:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:12.452 17:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.452 17:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671780 kB' 'MemFree: 54175984 kB' 'MemUsed: 6495796 kB' 'SwapCached: 0 kB' 
'Active: 2785136 kB' 'Inactive: 240592 kB' 'Active(anon): 2531480 kB' 'Inactive(anon): 0 kB' 'Active(file): 253656 kB' 'Inactive(file): 240592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2734588 kB' 'Mapped: 86624 kB' 'AnonPages: 291332 kB' 'Shmem: 2240340 kB' 'KernelStack: 13224 kB' 'PageTables: 3576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169152 kB' 'Slab: 740076 kB' 'SReclaimable: 169152 kB' 'SUnreclaim: 570924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- 
setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 
17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 17:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.453 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.453 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 
00:04:12.453 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.453 17:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.453 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.453 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.453 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.453 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.453 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.453 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.453 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.453 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.453 17:14:20 -- setup/common.sh@32 -- # continue 00:04:12.453 17:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.453 17:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.453 17:14:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.453 17:14:20 -- setup/common.sh@33 -- # echo 0 00:04:12.453 17:14:20 -- setup/common.sh@33 -- # return 0 00:04:12.453 17:14:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.453 17:14:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.453 17:14:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.453 17:14:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.453 17:14:20 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:12.453 node0=512 expecting 512 00:04:12.453 17:14:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.453 17:14:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.453 17:14:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.453 17:14:20 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:12.453 node1=512 expecting 512 00:04:12.453 17:14:20 -- 
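The two `get_meminfo HugePages_Surp 0` / `… 1` calls traced above exercise the per-node branch: when a node number is passed, `mem_f` switches from `/proc/meminfo` to `/sys/devices/system/node/nodeN/meminfo`, and the `"${mem[@]#Node +([0-9]) }"` expansion strips the `Node N ` prefix every line carries there (the `+([0-9])` pattern needs `extglob`). A simplified reconstruction that takes the file path as a parameter so it is testable without sysfs:

```shell
#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern used below

# Sketch of the per-node branch: read a node-style meminfo file,
# strip the "Node N " prefix as the trace does, then scan for one key.
node_meminfo() {
    local mem_f=$1 get=$2 var val _ line
    local -a mem
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node N "
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Demo with a fabricated node0-style file
tmp=$(mktemp)
printf '%s\n' 'Node 0 MemTotal: 65652964 kB' 'Node 0 HugePages_Surp: 0' > "$tmp"
node_meminfo "$tmp" HugePages_Surp   # prints 0
rm -f "$tmp"
```

Both nodes report `HugePages_Surp: 0` here, so `nodes_test[node] += 0` leaves the expected 512 pages per node, matching the `node0=512 expecting 512` / `node1=512 expecting 512` lines.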
setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:12.453 00:04:12.453 real 0m4.120s 00:04:12.453 user 0m1.638s 00:04:12.453 sys 0m2.551s 00:04:12.453 17:14:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.453 17:14:20 -- common/autotest_common.sh@10 -- # set +x 00:04:12.453 ************************************ 00:04:12.453 END TEST even_2G_alloc 00:04:12.453 ************************************ 00:04:12.453 17:14:20 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:12.453 17:14:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.453 17:14:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.453 17:14:20 -- common/autotest_common.sh@10 -- # set +x 00:04:12.453 ************************************ 00:04:12.453 START TEST odd_alloc 00:04:12.453 ************************************ 00:04:12.453 17:14:20 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:12.453 17:14:20 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:12.453 17:14:20 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:12.453 17:14:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:12.453 17:14:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.453 17:14:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:12.453 17:14:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:12.453 17:14:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.453 17:14:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.453 17:14:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:12.453 17:14:20 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:12.453 17:14:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.453 17:14:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.453 17:14:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.453 17:14:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:12.453 17:14:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 
00:04:12.453 17:14:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:12.453 17:14:20 -- setup/hugepages.sh@83 -- # : 513 00:04:12.453 17:14:20 -- setup/hugepages.sh@84 -- # : 1 00:04:12.453 17:14:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.453 17:14:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:12.453 17:14:20 -- setup/hugepages.sh@83 -- # : 0 00:04:12.453 17:14:20 -- setup/hugepages.sh@84 -- # : 0 00:04:12.453 17:14:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.453 17:14:20 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:12.453 17:14:20 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:12.453 17:14:20 -- setup/hugepages.sh@160 -- # setup output 00:04:12.453 17:14:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.453 17:14:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.815 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:15.815 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:00:01.3 (8086 0b00): Already 
using the vfio-pci driver 00:04:15.815 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:15.815 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:16.388 17:14:24 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:16.388 17:14:24 -- setup/hugepages.sh@89 -- # local node 00:04:16.388 17:14:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.388 17:14:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.388 17:14:24 -- setup/hugepages.sh@92 -- # local surp 00:04:16.388 17:14:24 -- setup/hugepages.sh@93 -- # local resv 00:04:16.388 17:14:24 -- setup/hugepages.sh@94 -- # local anon 00:04:16.388 17:14:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.388 17:14:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.388 17:14:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.388 17:14:24 -- setup/common.sh@18 -- # local node= 00:04:16.388 17:14:24 -- setup/common.sh@19 -- # local var val 00:04:16.388 17:14:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.389 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.389 17:14:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.389 17:14:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.389 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.389 17:14:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107060548 kB' 'MemAvailable: 110547292 kB' 'Buffers: 4136 kB' 'Cached: 12166352 kB' 'SwapCached: 0 kB' 'Active: 8975704 kB' 'Inactive: 3765476 kB' 'Active(anon): 8580496 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573980 kB' 'Mapped: 181968 kB' 'Shmem: 8009804 kB' 'KReclaimable: 300636 kB' 'Slab: 1572896 kB' 'SReclaimable: 300636 kB' 'SUnreclaim: 1272260 kB' 'KernelStack: 27120 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501376 kB' 'Committed_AS: 9879592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235616 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 
00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ 
Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 
17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.389 17:14:24 -- setup/common.sh@32 -- # 
continue 00:04:16.389 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.390 17:14:24 -- setup/common.sh@33 -- # echo 0 00:04:16.390 17:14:24 -- setup/common.sh@33 -- # return 0 00:04:16.390 17:14:24 -- setup/hugepages.sh@97 -- # anon=0 00:04:16.390 17:14:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.390 17:14:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.390 17:14:24 -- setup/common.sh@18 -- # local node= 00:04:16.390 17:14:24 -- setup/common.sh@19 -- # local var val 00:04:16.390 17:14:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.390 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.390 17:14:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.390 17:14:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.390 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.390 17:14:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107060600 kB' 'MemAvailable: 110547344 kB' 'Buffers: 4136 kB' 'Cached: 12166356 kB' 'SwapCached: 0 kB' 'Active: 8975336 kB' 'Inactive: 3765476 kB' 'Active(anon): 8580128 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573636 kB' 'Mapped: 181932 kB' 'Shmem: 8009808 kB' 'KReclaimable: 300636 kB' 'Slab: 1572928 kB' 'SReclaimable: 300636 kB' 'SUnreclaim: 1272292 kB' 'KernelStack: 27120 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 70501376 kB' 'Committed_AS: 9879604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235584 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 
17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.390 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.390 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 
00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 
-- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 
00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.391 17:14:24 -- setup/common.sh@33 -- # echo 0 00:04:16.391 17:14:24 -- setup/common.sh@33 -- # return 0 00:04:16.391 17:14:24 -- setup/hugepages.sh@99 -- # surp=0 00:04:16.391 17:14:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.391 17:14:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.391 17:14:24 -- setup/common.sh@18 -- # local node= 00:04:16.391 17:14:24 -- setup/common.sh@19 -- # local var val 00:04:16.391 17:14:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.391 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.391 17:14:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.391 17:14:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.391 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.391 17:14:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.391 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107060600 kB' 'MemAvailable: 110547344 kB' 'Buffers: 4136 kB' 'Cached: 12166368 kB' 'SwapCached: 0 kB' 'Active: 8975360 kB' 'Inactive: 3765476 kB' 'Active(anon): 8580152 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573632 kB' 'Mapped: 181932 kB' 'Shmem: 8009820 kB' 'KReclaimable: 300636 kB' 'Slab: 1572928 kB' 'SReclaimable: 300636 kB' 'SUnreclaim: 1272292 kB' 'KernelStack: 27120 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501376 kB' 'Committed_AS: 9879620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235584 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.391 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.391 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # 
continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 
-- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.392 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.392 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 
00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.393 17:14:24 -- setup/common.sh@33 -- # echo 0 00:04:16.393 17:14:24 -- setup/common.sh@33 -- # return 0 00:04:16.393 17:14:24 -- setup/hugepages.sh@100 -- # resv=0 00:04:16.393 17:14:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:16.393 nr_hugepages=1025 00:04:16.393 17:14:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.393 resv_hugepages=0 00:04:16.393 17:14:24 -- 
setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.393 surplus_hugepages=0 00:04:16.393 17:14:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.393 anon_hugepages=0 00:04:16.393 17:14:24 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:16.393 17:14:24 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:16.393 17:14:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.393 17:14:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.393 17:14:24 -- setup/common.sh@18 -- # local node= 00:04:16.393 17:14:24 -- setup/common.sh@19 -- # local var val 00:04:16.393 17:14:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.393 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.393 17:14:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.393 17:14:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.393 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.393 17:14:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.393 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107061976 kB' 'MemAvailable: 110548720 kB' 'Buffers: 4136 kB' 'Cached: 12166392 kB' 'SwapCached: 0 kB' 'Active: 8974816 kB' 'Inactive: 3765476 kB' 'Active(anon): 8579608 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573060 kB' 'Mapped: 181932 kB' 'Shmem: 8009844 kB' 'KReclaimable: 300636 kB' 'Slab: 1572928 kB' 'SReclaimable: 300636 kB' 'SUnreclaim: 1272292 kB' 'KernelStack: 27024 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501376 kB' 'Committed_AS: 9879636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235584 kB' 'VmallocChunk: 0 kB' 
'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.393 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.393 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:16.394 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.394 17:14:24 -- setup/common.sh@33 -- # echo 1025 00:04:16.394 17:14:24 -- setup/common.sh@33 -- # return 0 00:04:16.394 17:14:24 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:16.394 17:14:24 -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.394 17:14:24 -- setup/hugepages.sh@27 -- # local node 00:04:16.394 17:14:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.394 17:14:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:16.394 17:14:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.394 17:14:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:16.394 17:14:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.394 17:14:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.394 17:14:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.394 17:14:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.394 17:14:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.394 17:14:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.394 17:14:24 -- setup/common.sh@18 -- # local node=0 00:04:16.394 17:14:24 -- setup/common.sh@19 -- # local var val 00:04:16.394 17:14:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.394 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.394 17:14:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.394 17:14:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.394 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.394 17:14:24 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.394 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.394 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 52902880 kB' 'MemUsed: 12750084 kB' 'SwapCached: 0 kB' 'Active: 6191064 kB' 'Inactive: 3524884 kB' 'Active(anon): 6049512 kB' 'Inactive(anon): 0 kB' 'Active(file): 141552 kB' 'Inactive(file): 3524884 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9435936 kB' 'Mapped: 95308 kB' 'AnonPages: 283180 kB' 'Shmem: 5769500 kB' 'KernelStack: 13880 kB' 'PageTables: 4640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 832688 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 701196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:16.395 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.395 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.395 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.395 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.395 17:14:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.395 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.395 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.395 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.395 17:14:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.395 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.395 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.395 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.395 17:14:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:16.395 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.395 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.395 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.395 [xtrace elided: the same test/continue cycle repeats for every remaining node0 meminfo key (Active, Inactive, ... HugePages_Total, HugePages_Free) until HugePages_Surp matches] 00:04:16.396 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.396 17:14:24 -- setup/common.sh@33 -- # echo 0 00:04:16.396 17:14:24 -- setup/common.sh@33 -- # return 0 00:04:16.396 17:14:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.396 17:14:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.396 17:14:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.396 17:14:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:16.396 17:14:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.396 17:14:24 -- setup/common.sh@18 -- # local node=1 00:04:16.396 17:14:24 -- setup/common.sh@19 -- # local var val 00:04:16.396 17:14:24 --
setup/common.sh@20 -- # local mem_f mem 00:04:16.396 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.396 17:14:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:16.396 17:14:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:16.396 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.396 17:14:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.396 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.396 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.396 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671780 kB' 'MemFree: 54159096 kB' 'MemUsed: 6512684 kB' 'SwapCached: 0 kB' 'Active: 2783776 kB' 'Inactive: 240592 kB' 'Active(anon): 2530120 kB' 'Inactive(anon): 0 kB' 'Active(file): 253656 kB' 'Inactive(file): 240592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2734608 kB' 'Mapped: 86624 kB' 'AnonPages: 289880 kB' 'Shmem: 2240360 kB' 'KernelStack: 13144 kB' 'PageTables: 3572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169144 kB' 'Slab: 740240 kB' 'SReclaimable: 169144 kB' 'SUnreclaim: 571096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:16.396 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.396 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.396 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.396 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.396 17:14:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.396 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.396 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.396 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 
00:04:16.396 17:14:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.396 17:14:24 -- setup/common.sh@32 -- # continue 00:04:16.396 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.396 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.396 [xtrace elided: identical test/continue cycles for the remaining node1 meminfo keys (SwapCached, Active, ... HugePages_Total, HugePages_Free) until HugePages_Surp matches] 00:04:16.397 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.397 17:14:24 -- setup/common.sh@33 -- # echo 0 00:04:16.397 17:14:24 -- setup/common.sh@33 -- # return 0 00:04:16.397 17:14:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.397 17:14:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.397 17:14:24 --
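The xtrace through this point is `setup/common.sh`'s `get_meminfo` walking `/sys/devices/system/node/node1/meminfo` one `key: value` line at a time until `HugePages_Surp` matches, then echoing its value (0). A minimal standalone sketch of that scan pattern follows; the function name `get_meminfo_from`, the sample file, and the sed-based "Node N " prefix strip are illustrative assumptions, not the harness's exact code:

```shell
# Sketch of the scan pattern traced above (simplified): strip the
# "Node N " prefix that per-node sysfs meminfo lines carry, then read
# "key: value" pairs until the requested key matches.
get_meminfo_from() {                     # get_meminfo_from <key> <file>
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        # every non-matching key produces one "continue" in the xtrace
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    echo 0                               # sketch behavior: absent key reads as 0
}

# fabricated two-line sample standing in for
# /sys/devices/system/node/node1/meminfo
printf 'Node 1 HugePages_Total: 513\nNode 1 HugePages_Surp: 0\n' \
    > /tmp/meminfo.sample
get_meminfo_from HugePages_Total /tmp/meminfo.sample   # prints 513
```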
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.397 17:14:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.397 17:14:24 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:16.397 node0=512 expecting 513 00:04:16.397 17:14:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.397 17:14:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.397 17:14:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.397 17:14:24 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:16.397 node1=513 expecting 512 00:04:16.397 17:14:24 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:16.397 00:04:16.397 real 0m4.168s 00:04:16.397 user 0m1.595s 00:04:16.397 sys 0m2.644s 00:04:16.397 17:14:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.397 17:14:24 -- common/autotest_common.sh@10 -- # set +x 00:04:16.397 ************************************ 00:04:16.397 END TEST odd_alloc 00:04:16.397 ************************************ 00:04:16.657 17:14:24 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:16.657 17:14:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:16.657 17:14:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.657 17:14:24 -- common/autotest_common.sh@10 -- # set +x 00:04:16.657 ************************************ 00:04:16.657 START TEST custom_alloc 00:04:16.657 ************************************ 00:04:16.657 17:14:24 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:16.657 17:14:24 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:16.657 17:14:24 -- setup/hugepages.sh@169 -- # local node 00:04:16.657 17:14:24 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:16.657 17:14:24 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:16.657 17:14:24 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:16.657 17:14:24 -- setup/hugepages.sh@174 
-- # get_test_nr_hugepages 1048576 00:04:16.657 17:14:24 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:16.657 17:14:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.657 17:14:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.657 17:14:24 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:16.657 17:14:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.657 17:14:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.657 17:14:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.657 17:14:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.657 17:14:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.657 17:14:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.657 17:14:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.657 17:14:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.657 17:14:24 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:16.657 17:14:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.657 17:14:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:16.657 17:14:24 -- setup/hugepages.sh@83 -- # : 256 00:04:16.657 17:14:24 -- setup/hugepages.sh@84 -- # : 1 00:04:16.657 17:14:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.657 17:14:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:16.657 17:14:24 -- setup/hugepages.sh@83 -- # : 0 00:04:16.657 17:14:24 -- setup/hugepages.sh@84 -- # : 0 00:04:16.658 17:14:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.658 17:14:24 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:16.658 17:14:24 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:16.658 17:14:24 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:16.658 17:14:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:16.658 17:14:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.658 17:14:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
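The custom_alloc setup being traced here ends with `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'` (`hugepages.sh@181`-`@187` further down): one entry per NUMA node, joined on a comma IFS, totalling 1536 pages. A minimal sketch of that assembly, reusing the trace's own variable names:

```shell
# Rebuild the HUGENODE string the way the hugepages.sh@181-187 trace
# shows: one "nodes_hp[i]=count" entry per node, joined by IFS=','.
IFS=,
nodes_hp=([0]=512 [1]=1024)
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
# ${HUGENODE[*]} joins entries on the first IFS character, the comma
echo "HUGENODE=${HUGENODE[*]}"   # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "total=$_nr_hugepages"      # total=1536
```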
00:04:16.658 17:14:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:16.658 17:14:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.658 17:14:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.658 17:14:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.658 17:14:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.658 17:14:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.658 17:14:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.658 17:14:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.658 17:14:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.658 17:14:24 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:16.658 17:14:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.658 17:14:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:16.658 17:14:24 -- setup/hugepages.sh@78 -- # return 0 00:04:16.658 17:14:24 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:16.658 17:14:24 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:16.658 17:14:24 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:16.658 17:14:24 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:16.658 17:14:24 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:16.658 17:14:24 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:16.658 17:14:24 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:16.658 17:14:24 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:16.658 17:14:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.658 17:14:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.658 17:14:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.658 17:14:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.658 17:14:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.658 
17:14:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.658 17:14:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.658 17:14:24 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:16.658 17:14:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.658 17:14:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:16.658 17:14:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.658 17:14:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:16.658 17:14:24 -- setup/hugepages.sh@78 -- # return 0 00:04:16.658 17:14:24 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:16.658 17:14:24 -- setup/hugepages.sh@187 -- # setup output 00:04:16.658 17:14:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.658 17:14:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.955 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:20.216 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:00:01.3 (8086 0b00): Already 
using the vfio-pci driver 00:04:20.216 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:20.216 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:20.480 17:14:28 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:20.480 17:14:28 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:20.480 17:14:28 -- setup/hugepages.sh@89 -- # local node 00:04:20.480 17:14:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.480 17:14:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.480 17:14:28 -- setup/hugepages.sh@92 -- # local surp 00:04:20.480 17:14:28 -- setup/hugepages.sh@93 -- # local resv 00:04:20.480 17:14:28 -- setup/hugepages.sh@94 -- # local anon 00:04:20.480 17:14:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.480 17:14:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.480 17:14:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.480 17:14:28 -- setup/common.sh@18 -- # local node= 00:04:20.480 17:14:28 -- setup/common.sh@19 -- # local var val 00:04:20.480 17:14:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.480 17:14:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.480 17:14:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.480 17:14:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.480 17:14:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.480 17:14:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 106023484 kB' 'MemAvailable: 109510228 kB' 'Buffers: 4136 kB' 'Cached: 12166504 kB' 'SwapCached: 0 kB' 'Active: 8976580 kB' 'Inactive: 3765476 kB' 'Active(anon): 8581372 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574708 kB' 'Mapped: 182092 kB' 'Shmem: 8009956 kB' 'KReclaimable: 300636 kB' 'Slab: 1572820 kB' 'SReclaimable: 300636 kB' 'SUnreclaim: 1272184 kB' 'KernelStack: 27008 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978112 kB' 'Committed_AS: 9880388 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235616 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 
-- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 [xtrace elided: the scan continues cycling through the remaining meminfo keys (SwapCached, Active, ... Zswapped and beyond) until AnonHugePages matches]
00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # [[ SReclaimable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.480 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.480 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- 
setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.481 17:14:28 -- setup/common.sh@33 -- # echo 0 00:04:20.481 17:14:28 -- setup/common.sh@33 -- # return 0 00:04:20.481 17:14:28 -- setup/hugepages.sh@97 -- # anon=0 00:04:20.481 17:14:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.481 17:14:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.481 17:14:28 -- setup/common.sh@18 -- # local node= 00:04:20.481 17:14:28 -- setup/common.sh@19 -- # local var val 00:04:20.481 17:14:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.481 17:14:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.481 17:14:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.481 17:14:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.481 17:14:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.481 17:14:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 106024228 kB' 'MemAvailable: 109510972 kB' 'Buffers: 4136 kB' 'Cached: 12166508 kB' 'SwapCached: 0 kB' 'Active: 8976404 kB' 'Inactive: 3765476 kB' 'Active(anon): 8581196 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574520 kB' 'Mapped: 182008 kB' 'Shmem: 8009960 kB' 'KReclaimable: 300636 kB' 'Slab: 1572804 kB' 'SReclaimable: 300636 kB' 'SUnreclaim: 1272168 kB' 'KernelStack: 26992 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978112 kB' 'Committed_AS: 9880400 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235584 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 
17:14:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.481 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.481 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- 
setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 
-- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 
00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.482 17:14:28 -- setup/common.sh@33 -- # echo 0 00:04:20.482 17:14:28 -- setup/common.sh@33 -- # return 0 00:04:20.482 17:14:28 -- setup/hugepages.sh@99 -- # surp=0 00:04:20.482 17:14:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.482 17:14:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.482 17:14:28 -- setup/common.sh@18 -- # local node= 00:04:20.482 17:14:28 -- setup/common.sh@19 -- # local var val 00:04:20.482 17:14:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.482 17:14:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.482 17:14:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.482 17:14:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.482 17:14:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.482 17:14:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.482 17:14:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 106027384 kB' 'MemAvailable: 109514128 kB' 'Buffers: 4136 kB' 'Cached: 12166520 kB' 'SwapCached: 0 kB' 'Active: 8977008 kB' 'Inactive: 3765476 kB' 'Active(anon): 8581800 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575172 kB' 'Mapped: 182008 kB' 'Shmem: 8009972 kB' 'KReclaimable: 300636 kB' 'Slab: 1572804 kB' 'SReclaimable: 300636 kB' 'SUnreclaim: 1272168 kB' 'KernelStack: 26992 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978112 kB' 'Committed_AS: 9880416 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235584 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 
00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.482 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.482 17:14:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.483 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.483 17:14:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.483 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.483 17:14:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.483 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.483 17:14:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.483 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.483 17:14:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.483 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.483 17:14:28 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.483 17:14:28 -- setup/common.sh@32 -- # continue 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.483 17:14:28 -- setup/common.sh@31 -- # read -r var val _
[repetitive trace elided: the same @32-continue/@31-IFS/@31-read cycle repeats for every remaining /proc/meminfo field, Active(file) through HugePages_Free, none of which matches HugePages_Rsvd]
00:04:20.484 17:14:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.484 17:14:28 -- setup/common.sh@33 -- # echo 0 00:04:20.484 17:14:28 -- setup/common.sh@33 -- # return 0 00:04:20.484 17:14:28 -- setup/hugepages.sh@100 -- # resv=0 00:04:20.484 17:14:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 nr_hugepages=1536 00:04:20.484 17:14:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:20.484 resv_hugepages=0 00:04:20.484 17:14:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.484 surplus_hugepages=0 00:04:20.484 17:14:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.484 anon_hugepages=0 00:04:20.484 17:14:28 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:20.484 17:14:28 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:20.484 17:14:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.484 17:14:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.484 17:14:28 -- setup/common.sh@18 -- # local node= 00:04:20.484 17:14:28 -- setup/common.sh@19 -- # local var val 00:04:20.484 17:14:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.484 17:14:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.484 17:14:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.484 17:14:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.484 17:14:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.484 17:14:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.484 17:14:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.484 17:14:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.484 17:14:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 106027384 kB' 'MemAvailable: 109514128 kB' 'Buffers: 4136 kB' 'Cached: 12166544 kB' 'SwapCached: 0 kB' 'Active: 8976680 kB' 'Inactive: 3765476 kB' 'Active(anon): 8581472 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574784 kB' 'Mapped: 182008 kB' 'Shmem: 8009996 kB' 'KReclaimable: 300636 kB' 'Slab: 1572804 kB' 'SReclaimable: 300636 kB' 'SUnreclaim: 1272168 kB' 'KernelStack: 26976 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978112 kB' 'Committed_AS: 9880432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235584 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:20.484 17:14:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.484 17:14:29 -- setup/common.sh@32 -- # continue
[repetitive trace elided: the read loop again walks every /proc/meminfo field, MemFree through Unaccepted, continuing past each one until HugePages_Total matches]
00:04:20.748 17:14:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.748 17:14:29 -- setup/common.sh@33 -- # echo 1536 00:04:20.748 17:14:29 -- setup/common.sh@33 -- # return 0 00:04:20.748 17:14:29 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:20.748 17:14:29 -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.748 17:14:29 -- setup/hugepages.sh@27 -- # local node 00:04:20.748 17:14:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.748 17:14:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:20.748 17:14:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.748 17:14:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:20.748 17:14:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:20.748 17:14:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.748 17:14:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.748 17:14:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.748 17:14:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:20.748 17:14:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.748 17:14:29 -- setup/common.sh@18 -- # local node=0 00:04:20.748 17:14:29 -- setup/common.sh@19 -- # local var val 00:04:20.748 17:14:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.748 17:14:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.748 17:14:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:20.748 17:14:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:20.748 17:14:29 -- setup/common.sh@28 -- # mapfile -t
mem 00:04:20.748 17:14:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.748 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.748 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.748 17:14:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 52923364 kB' 'MemUsed: 12729600 kB' 'SwapCached: 0 kB' 'Active: 6192948 kB' 'Inactive: 3524884 kB' 'Active(anon): 6051396 kB' 'Inactive(anon): 0 kB' 'Active(file): 141552 kB' 'Inactive(file): 3524884 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9436016 kB' 'Mapped: 95384 kB' 'AnonPages: 285136 kB' 'Shmem: 5769580 kB' 'KernelStack: 13912 kB' 'PageTables: 4784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 832740 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 701248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:20.748 17:14:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.748 17:14:29 -- setup/common.sh@32 -- # continue
[repetitive trace elided: the node0 read loop continues field by field, MemFree through FileHugePages in this chunk, matching each against HugePages_Surp]
# continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@33 -- # echo 0 00:04:20.749 17:14:29 -- setup/common.sh@33 -- # return 0 00:04:20.749 17:14:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.749 17:14:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.749 17:14:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.749 17:14:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:20.749 17:14:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.749 17:14:29 -- setup/common.sh@18 -- # local node=1 00:04:20.749 17:14:29 -- setup/common.sh@19 
-- # local var val 00:04:20.749 17:14:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.749 17:14:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.749 17:14:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:20.749 17:14:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:20.749 17:14:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.749 17:14:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671780 kB' 'MemFree: 53105760 kB' 'MemUsed: 7566020 kB' 'SwapCached: 0 kB' 'Active: 2784736 kB' 'Inactive: 240592 kB' 'Active(anon): 2531080 kB' 'Inactive(anon): 0 kB' 'Active(file): 253656 kB' 'Inactive(file): 240592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2734668 kB' 'Mapped: 86624 kB' 'AnonPages: 290804 kB' 'Shmem: 2240420 kB' 'KernelStack: 13112 kB' 'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169144 kB' 'Slab: 740080 kB' 'SReclaimable: 169144 kB' 'SUnreclaim: 570936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.749 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.749 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': '
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': '
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': '
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': '
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': '
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # continue
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # IFS=': '
00:04:20.750 17:14:29 -- setup/common.sh@31 -- # read -r var val _
00:04:20.750 17:14:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.750 17:14:29 -- setup/common.sh@33 -- # echo 0
00:04:20.750 17:14:29 -- setup/common.sh@33 -- # return 0
00:04:20.750 17:14:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:20.750 17:14:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:20.750 17:14:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:20.750 17:14:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:20.750 17:14:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:20.750 node0=512 expecting 512
00:04:20.750 17:14:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:20.750 17:14:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:20.750 17:14:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:20.750 17:14:29 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:20.750 node1=1024 expecting 1024
00:04:20.750 17:14:29 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:20.750
00:04:20.750 real 0m4.178s
00:04:20.750 user 0m1.632s
00:04:20.750 sys 0m2.618s
00:04:20.750 17:14:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:20.750 17:14:29 -- common/autotest_common.sh@10 -- # set +x
00:04:20.750 ************************************
00:04:20.750 END TEST custom_alloc
00:04:20.750 ************************************
00:04:20.750 17:14:29 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:20.750 17:14:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:20.750 17:14:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:20.750 17:14:29 -- common/autotest_common.sh@10 -- # set +x
00:04:20.750 ************************************
00:04:20.750 START TEST no_shrink_alloc
00:04:20.750 ************************************
00:04:20.750 17:14:29 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:20.750 17:14:29 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:20.750 17:14:29 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:20.750 17:14:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:20.750 17:14:29 -- setup/hugepages.sh@51 -- # shift
00:04:20.750 17:14:29 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:20.750 17:14:29 -- setup/hugepages.sh@52 -- # local node_ids
00:04:20.750 17:14:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:20.750 17:14:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:20.750 17:14:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:20.750 17:14:29 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:20.750 17:14:29 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:20.750 17:14:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:20.750 17:14:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:20.750 17:14:29 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:20.750 17:14:29 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:20.750 17:14:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:20.750 17:14:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:20.750 17:14:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:20.750 17:14:29 -- setup/hugepages.sh@73 -- # return 0
00:04:20.750 17:14:29 -- setup/hugepages.sh@198 -- # setup output
00:04:20.750 17:14:29 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:20.750 17:14:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:24.958 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:24.958 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:24.958 17:14:33 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:24.958 17:14:33 -- setup/hugepages.sh@89 -- # local node
00:04:24.958 17:14:33 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:24.958 17:14:33 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:24.958 17:14:33 -- setup/hugepages.sh@92 -- # local surp
00:04:24.958 17:14:33 -- setup/hugepages.sh@93 -- # local resv
00:04:24.958 17:14:33 -- setup/hugepages.sh@94 -- # local anon
00:04:24.958 17:14:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:24.958 17:14:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:24.958 17:14:33 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:24.958 17:14:33 -- setup/common.sh@18 -- # local node=
00:04:24.958 17:14:33 -- setup/common.sh@19 -- # local var val
00:04:24.958 17:14:33 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.958 17:14:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.958 17:14:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.958 17:14:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.958 17:14:33 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.958 17:14:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': '
00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _
00:04:24.958 17:14:33 -- setup/common.sh@16 -- # printf '%s\n'
'MemTotal: 126324744 kB' 'MemFree: 107051524 kB' 'MemAvailable: 110538236 kB' 'Buffers: 4136 kB' 'Cached: 12166664 kB' 'SwapCached: 0 kB' 'Active: 8977572 kB' 'Inactive: 3765476 kB' 'Active(anon): 8582364 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575660 kB' 'Mapped: 182064 kB' 'Shmem: 8010116 kB' 'KReclaimable: 300572 kB' 'Slab: 1573216 kB' 'SReclaimable: 300572 kB' 'SUnreclaim: 1272644 kB' 'KernelStack: 27056 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9886408 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235632 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 
17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.958 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 
-- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- 
setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.959 17:14:33 -- setup/common.sh@33 -- # echo 0 00:04:24.959 17:14:33 -- setup/common.sh@33 -- # return 0 00:04:24.959 17:14:33 -- setup/hugepages.sh@97 -- # anon=0 00:04:24.959 17:14:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.959 17:14:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.959 17:14:33 -- setup/common.sh@18 -- # local node= 00:04:24.959 17:14:33 -- setup/common.sh@19 -- # local var val 00:04:24.959 17:14:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.959 17:14:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.959 17:14:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.959 17:14:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.959 17:14:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.959 17:14:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107051844 kB' 'MemAvailable: 110538556 kB' 'Buffers: 4136 kB' 'Cached: 12166668 kB' 'SwapCached: 0 kB' 'Active: 8978012 kB' 'Inactive: 3765476 kB' 'Active(anon): 8582804 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576100 kB' 'Mapped: 182052 kB' 'Shmem: 8010120 kB' 'KReclaimable: 300572 kB' 'Slab: 1573040 kB' 'SReclaimable: 300572 kB' 'SUnreclaim: 1272468 kB' 'KernelStack: 27104 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9884772 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235600 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 17:14:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 
17:14:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- 
setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- 
# continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 
00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.960 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.960 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.961 17:14:33 -- setup/common.sh@33 -- # echo 0 00:04:24.961 17:14:33 -- setup/common.sh@33 -- # return 0 00:04:24.961 17:14:33 -- setup/hugepages.sh@99 -- # surp=0 00:04:24.961 17:14:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.961 17:14:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.961 17:14:33 -- setup/common.sh@18 -- # local node= 00:04:24.961 17:14:33 -- setup/common.sh@19 -- # local var val 00:04:24.961 17:14:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.961 17:14:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.961 17:14:33 -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.961 17:14:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.961 17:14:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.961 17:14:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107050380 kB' 'MemAvailable: 110537092 kB' 'Buffers: 4136 kB' 'Cached: 12166680 kB' 'SwapCached: 0 kB' 'Active: 8977812 kB' 'Inactive: 3765476 kB' 'Active(anon): 8582604 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575844 kB' 'Mapped: 181976 kB' 'Shmem: 8010132 kB' 'KReclaimable: 300572 kB' 'Slab: 1572992 kB' 'SReclaimable: 300572 kB' 'SUnreclaim: 1272420 kB' 'KernelStack: 27264 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9886432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235664 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 
00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- 
# [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 
-- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.961 17:14:33 -- setup/common.sh@32 -- # 
continue 00:04:24.961 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 
17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.962 17:14:33 -- setup/common.sh@33 -- # echo 0 00:04:24.962 17:14:33 -- setup/common.sh@33 -- # 
return 0 00:04:24.962 17:14:33 -- setup/hugepages.sh@100 -- # resv=0 00:04:24.962 17:14:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.962 nr_hugepages=1024 00:04:24.962 17:14:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.962 resv_hugepages=0 00:04:24.962 17:14:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.962 surplus_hugepages=0 00:04:24.962 17:14:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.962 anon_hugepages=0 00:04:24.962 17:14:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.962 17:14:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.962 17:14:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.962 17:14:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.962 17:14:33 -- setup/common.sh@18 -- # local node= 00:04:24.962 17:14:33 -- setup/common.sh@19 -- # local var val 00:04:24.962 17:14:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.962 17:14:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.962 17:14:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.962 17:14:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.962 17:14:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.962 17:14:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107050484 kB' 'MemAvailable: 110537196 kB' 'Buffers: 4136 kB' 'Cached: 12166680 kB' 'SwapCached: 0 kB' 'Active: 8977876 kB' 'Inactive: 3765476 kB' 'Active(anon): 8582668 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 575912 kB' 'Mapped: 181976 kB' 'Shmem: 8010132 kB' 'KReclaimable: 300572 kB' 'Slab: 1572992 kB' 'SReclaimable: 300572 kB' 'SUnreclaim: 1272420 kB' 'KernelStack: 27216 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9886448 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235648 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 
17:14:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.962 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.962 17:14:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.963 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.963 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.964 17:14:33 -- setup/common.sh@33 -- # echo 1024 00:04:24.964 17:14:33 -- setup/common.sh@33 -- # return 0 00:04:24.964 17:14:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.964 17:14:33 -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.964 17:14:33 -- setup/hugepages.sh@27 -- # local node 00:04:24.964 17:14:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.964 17:14:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.964 17:14:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.964 17:14:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:24.964 17:14:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.964 17:14:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.964 17:14:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.964 17:14:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.964 17:14:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.964 17:14:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.964 17:14:33 -- setup/common.sh@18 -- # local node=0 00:04:24.964 17:14:33 -- setup/common.sh@19 -- # local var val 00:04:24.964 17:14:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.964 17:14:33 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.964 17:14:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.964 17:14:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.964 17:14:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.964 17:14:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 51871104 kB' 'MemUsed: 13781860 kB' 'SwapCached: 0 kB' 'Active: 6192092 kB' 'Inactive: 3524884 kB' 'Active(anon): 6050540 kB' 'Inactive(anon): 0 kB' 'Active(file): 141552 kB' 'Inactive(file): 3524884 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9436060 kB' 'Mapped: 95352 kB' 'AnonPages: 284088 kB' 'Shmem: 5769624 kB' 'KernelStack: 14040 kB' 'PageTables: 4888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 832916 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 701424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ MemUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 
17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- 
# continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.964 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.964 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.965 
17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # continue 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.965 17:14:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.965 17:14:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.965 17:14:33 -- setup/common.sh@33 -- # echo 0 00:04:24.965 17:14:33 -- setup/common.sh@33 -- # return 0 00:04:24.965 17:14:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.965 17:14:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.965 17:14:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
00:04:24.965 17:14:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.965 17:14:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.965 node0=1024 expecting 1024 00:04:24.965 17:14:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.965 17:14:33 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:24.965 17:14:33 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:24.965 17:14:33 -- setup/hugepages.sh@202 -- # setup output 00:04:24.965 17:14:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.965 17:14:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:28.268 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:28.268 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:28.268 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:28.268 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:28.268 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:28.268 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:28.529 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:28.529 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:28.529 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:28.529 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:28.529 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:28.530 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:28.530 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:28.530 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:28.530 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:28.530 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:28.530 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:28.795 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:28.795 
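The long key-by-key trace above is the `get_meminfo` helper scanning `/proc/meminfo` (or a per-node `/sys/devices/system/node/nodeN/meminfo` file) for a single key such as `HugePages_Total` or `HugePages_Surp`. A minimal sketch of that lookup, condensed into a pipeline rather than the traced `read` loop — the function name mirrors `setup/common.sh`, but this compact form is an illustrative assumption, not the SPDK source:

```shell
#!/usr/bin/env bash
# get_meminfo KEY [MEMINFO_FILE] — print the value column for KEY.
# Defaults to /proc/meminfo; pass a sysfs per-node meminfo path for
# node-local stats, as the trace does for node0.
get_meminfo() {
    local key=$1 mem_f=${2:-/proc/meminfo}
    # Per-node meminfo lines carry a "Node N " prefix; strip it so the
    # same key match works for both /proc/meminfo and sysfs files.
    sed -E 's/^Node [0-9]+ //' "$mem_f" | awk -v k="$key:" '$1 == k { print $2 }'
}

# Usage (value depends on the host's hugepage reservation):
[[ -r /proc/meminfo ]] && get_meminfo HugePages_Total
```

The traced implementation instead reads every line with `IFS=': ' read -r var val _` and `continue`s until `var` matches the requested key, which is why each non-matching meminfo field produces its own trace entry.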
17:14:37 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:28.795 17:14:37 -- setup/hugepages.sh@89 -- # local node 00:04:28.795 17:14:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.795 17:14:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.795 17:14:37 -- setup/hugepages.sh@92 -- # local surp 00:04:28.795 17:14:37 -- setup/hugepages.sh@93 -- # local resv 00:04:28.795 17:14:37 -- setup/hugepages.sh@94 -- # local anon 00:04:28.795 17:14:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.795 17:14:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.795 17:14:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.795 17:14:37 -- setup/common.sh@18 -- # local node= 00:04:28.795 17:14:37 -- setup/common.sh@19 -- # local var val 00:04:28.795 17:14:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.795 17:14:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.795 17:14:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.795 17:14:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.795 17:14:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.795 17:14:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107067916 kB' 'MemAvailable: 110554628 kB' 'Buffers: 4136 kB' 'Cached: 12166800 kB' 'SwapCached: 0 kB' 'Active: 8978532 kB' 'Inactive: 3765476 kB' 'Active(anon): 8583324 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576376 kB' 'Mapped: 182020 kB' 'Shmem: 8010252 kB' 'KReclaimable: 300572 kB' 'Slab: 1572488 kB' 
'SReclaimable: 300572 kB' 'SUnreclaim: 1271916 kB' 'KernelStack: 27056 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9882260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235568 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 
00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.795 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.795 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 
17:14:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.796 17:14:37 -- setup/common.sh@33 -- # echo 0 00:04:28.796 17:14:37 -- setup/common.sh@33 -- # return 0 00:04:28.796 17:14:37 -- setup/hugepages.sh@97 -- # anon=0 00:04:28.796 17:14:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.796 17:14:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.796 17:14:37 -- setup/common.sh@18 -- # local node= 00:04:28.796 17:14:37 -- setup/common.sh@19 -- # local var val 00:04:28.796 17:14:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.796 17:14:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.796 17:14:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.796 17:14:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.796 17:14:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.796 17:14:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107070332 kB' 'MemAvailable: 110557044 kB' 'Buffers: 4136 kB' 'Cached: 12166800 kB' 'SwapCached: 0 kB' 'Active: 8977900 kB' 'Inactive: 3765476 kB' 'Active(anon): 8582692 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575728 kB' 'Mapped: 181996 kB' 'Shmem: 8010252 kB' 'KReclaimable: 300572 kB' 'Slab: 1572484 kB' 'SReclaimable: 300572 kB' 'SUnreclaim: 1271912 kB' 'KernelStack: 27040 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9882268 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235568 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 
00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.796 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.796 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.797 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.797 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.798 17:14:37 -- setup/common.sh@33 -- # echo 0 00:04:28.798 17:14:37 -- setup/common.sh@33 -- # return 0 00:04:28.798 17:14:37 -- setup/hugepages.sh@99 -- # surp=0 00:04:28.798 17:14:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.798 17:14:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.798 17:14:37 -- setup/common.sh@18 -- # local node= 00:04:28.798 17:14:37 -- setup/common.sh@19 -- # local var val 00:04:28.798 17:14:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.798 17:14:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.798 17:14:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.798 17:14:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.798 17:14:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.798 17:14:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 
107070056 kB' 'MemAvailable: 110556768 kB' 'Buffers: 4136 kB' 'Cached: 12166812 kB' 'SwapCached: 0 kB' 'Active: 8977892 kB' 'Inactive: 3765476 kB' 'Active(anon): 8582684 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575700 kB' 'Mapped: 181996 kB' 'Shmem: 8010264 kB' 'KReclaimable: 300572 kB' 'Slab: 1572548 kB' 'SReclaimable: 300572 kB' 'SUnreclaim: 1271976 kB' 'KernelStack: 27040 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9882284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235568 kB' 'VmallocChunk: 0 kB' 'Percpu: 122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 
00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.798 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.798 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ 
Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.799 17:14:37 -- setup/common.sh@33 -- # echo 0 00:04:28.799 17:14:37 -- setup/common.sh@33 -- # return 0 00:04:28.799 17:14:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:28.799 17:14:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:28.799 nr_hugepages=1024 00:04:28.799 17:14:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.799 resv_hugepages=0 00:04:28.799 17:14:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.799 surplus_hugepages=0 00:04:28.799 17:14:37 -- setup/hugepages.sh@105 -- # 
echo anon_hugepages=0 00:04:28.799 anon_hugepages=0 00:04:28.799 17:14:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.799 17:14:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:28.799 17:14:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.799 17:14:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.799 17:14:37 -- setup/common.sh@18 -- # local node= 00:04:28.799 17:14:37 -- setup/common.sh@19 -- # local var val 00:04:28.799 17:14:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.799 17:14:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.799 17:14:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.799 17:14:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.799 17:14:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.799 17:14:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324744 kB' 'MemFree: 107069300 kB' 'MemAvailable: 110556012 kB' 'Buffers: 4136 kB' 'Cached: 12166840 kB' 'SwapCached: 0 kB' 'Active: 8977568 kB' 'Inactive: 3765476 kB' 'Active(anon): 8582360 kB' 'Inactive(anon): 0 kB' 'Active(file): 395208 kB' 'Inactive(file): 3765476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575312 kB' 'Mapped: 181996 kB' 'Shmem: 8010292 kB' 'KReclaimable: 300572 kB' 'Slab: 1572548 kB' 'SReclaimable: 300572 kB' 'SUnreclaim: 1271976 kB' 'KernelStack: 27024 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9882300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235584 kB' 'VmallocChunk: 0 kB' 'Percpu: 
122112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3802484 kB' 'DirectMap2M: 16848896 kB' 'DirectMap1G: 116391936 kB' 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 
-- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.799 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.799 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 
00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.800 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.800 17:14:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.800 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.800 17:14:37 -- setup/common.sh@33 -- # echo 1024 00:04:28.800 17:14:37 -- setup/common.sh@33 -- # return 0 00:04:28.800 17:14:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.800 17:14:37 -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.800 17:14:37 -- setup/hugepages.sh@27 -- # local node 00:04:28.800 17:14:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.800 17:14:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:28.800 17:14:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.800 17:14:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:28.800 17:14:37 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:28.800 17:14:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.800 17:14:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.800 17:14:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.800 17:14:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.800 17:14:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.800 17:14:37 -- setup/common.sh@18 -- # local node=0 00:04:28.800 17:14:37 -- setup/common.sh@19 -- # local var val 00:04:28.800 17:14:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.800 17:14:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.800 17:14:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.801 17:14:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.801 17:14:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.801 17:14:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 51886396 kB' 'MemUsed: 13766568 kB' 'SwapCached: 0 kB' 'Active: 6192212 kB' 'Inactive: 3524884 kB' 'Active(anon): 6050660 kB' 'Inactive(anon): 0 kB' 'Active(file): 141552 kB' 'Inactive(file): 3524884 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9436120 kB' 'Mapped: 95372 kB' 'AnonPages: 284052 kB' 'Shmem: 5769684 kB' 'KernelStack: 13896 kB' 'PageTables: 4676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 832656 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 701164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.801 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.801 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # continue 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 17:14:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 17:14:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.062 17:14:37 -- setup/common.sh@33 -- # echo 0 00:04:29.062 17:14:37 -- setup/common.sh@33 -- # return 0 00:04:29.062 17:14:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.062 17:14:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.062 17:14:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.062 17:14:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.062 17:14:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.062 node0=1024 expecting 1024 00:04:29.062 17:14:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.062 00:04:29.062 real 0m8.185s 00:04:29.062 user 0m3.209s 00:04:29.062 sys 0m5.120s 00:04:29.062 17:14:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.062 
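The long trace above is setup/common.sh scanning `/proc/meminfo`: line @31 sets `IFS=': '` and `read -r var val _` splits each `Name: value kB` entry, @32 `continue`s until the field matches the one requested (here `HugePages_Surp`), and @33 echoes the value. A minimal standalone sketch of that pattern, assuming a meminfo-style stream on stdin (the function name and sample input are illustrative, not the actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the setup/common.sh@31-33 scan traced above: split each
# "Name: value kB" line on ':' and spaces, skip until the requested
# field matches, then print its numeric value.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example against a snippet of the values from the trace above:
printf 'MemTotal: 65652964 kB\nHugePages_Total: 1024\nHugePages_Surp: 0\n' |
    get_meminfo_field HugePages_Surp
```

Against the sample snippet this prints `0`, the `HugePages_Surp` value the test compares to reach its `echo 0` / `return 0` path.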
17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:04:29.062 ************************************ 00:04:29.062 END TEST no_shrink_alloc 00:04:29.062 ************************************ 00:04:29.062 17:14:37 -- setup/hugepages.sh@217 -- # clear_hp 00:04:29.062 17:14:37 -- setup/hugepages.sh@37 -- # local node hp 00:04:29.062 17:14:37 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.062 17:14:37 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.062 17:14:37 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.062 17:14:37 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.062 17:14:37 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.062 17:14:37 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.062 17:14:37 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.062 17:14:37 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.062 17:14:37 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.062 17:14:37 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.062 17:14:37 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:29.062 17:14:37 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:29.062 00:04:29.062 real 0m29.624s 00:04:29.062 user 0m11.602s 00:04:29.062 sys 0m18.536s 00:04:29.062 17:14:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.062 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:04:29.062 ************************************ 00:04:29.062 END TEST hugepages 00:04:29.062 ************************************ 00:04:29.062 17:14:37 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:29.062 17:14:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.062 17:14:37 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:04:29.062 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:04:29.062 ************************************ 00:04:29.062 START TEST driver 00:04:29.062 ************************************ 00:04:29.062 17:14:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:29.062 * Looking for test storage... 00:04:29.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.062 17:14:37 -- setup/driver.sh@68 -- # setup reset 00:04:29.062 17:14:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.062 17:14:37 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.649 17:14:42 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:35.649 17:14:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.649 17:14:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.649 17:14:42 -- common/autotest_common.sh@10 -- # set +x 00:04:35.649 ************************************ 00:04:35.649 START TEST guess_driver 00:04:35.649 ************************************ 00:04:35.649 17:14:42 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:35.649 17:14:42 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:35.649 17:14:42 -- setup/driver.sh@47 -- # local fail=0 00:04:35.649 17:14:42 -- setup/driver.sh@49 -- # pick_driver 00:04:35.649 17:14:42 -- setup/driver.sh@36 -- # vfio 00:04:35.649 17:14:42 -- setup/driver.sh@21 -- # local iommu_grups 00:04:35.649 17:14:42 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:35.649 17:14:42 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:35.649 17:14:42 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:35.649 17:14:42 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:35.649 17:14:42 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:04:35.649 17:14:42 -- 
setup/driver.sh@30 -- # is_driver vfio_pci 00:04:35.649 17:14:42 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:35.649 17:14:42 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:35.649 17:14:42 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:35.649 17:14:42 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:35.649 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:35.649 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:35.649 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:35.649 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:35.649 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:35.649 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:35.649 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:35.649 17:14:42 -- setup/driver.sh@30 -- # return 0 00:04:35.649 17:14:42 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:35.649 17:14:42 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:35.649 17:14:42 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:35.649 17:14:42 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:35.649 Looking for driver=vfio-pci 00:04:35.649 17:14:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.649 17:14:42 -- setup/driver.sh@45 -- # setup output config 00:04:35.649 17:14:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.649 17:14:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.197 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.197 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.197 17:14:46 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.197 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.197 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.197 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.197 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.197 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.197 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.197 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.197 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.198 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.198 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.198 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.198 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.198 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.198 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.198 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.198 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.198 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.198 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.198 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.198 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.198 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.198 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.198 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.198 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.459 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.459 17:14:46 -- setup/driver.sh@61 -- # [[ 
vfio-pci == vfio-pci ]] 00:04:38.459 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.459 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.459 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.459 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.459 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.459 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.459 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.459 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.459 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.459 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.459 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.459 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.459 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.459 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.459 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.459 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.459 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.459 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.459 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.459 17:14:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.459 17:14:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:38.459 17:14:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.720 17:14:47 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:38.720 17:14:47 -- setup/driver.sh@65 -- # setup reset 00:04:38.720 17:14:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.720 17:14:47 -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.302 00:04:45.302 real 0m9.614s 00:04:45.302 user 0m3.166s 00:04:45.302 sys 0m5.647s 00:04:45.302 17:14:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.302 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.302 ************************************ 00:04:45.302 END TEST guess_driver 00:04:45.302 ************************************ 00:04:45.302 00:04:45.302 real 0m15.155s 00:04:45.302 user 0m4.812s 00:04:45.302 sys 0m8.703s 00:04:45.302 17:14:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.302 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.302 ************************************ 00:04:45.302 END TEST driver 00:04:45.302 ************************************ 00:04:45.302 17:14:52 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:45.302 17:14:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.302 17:14:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.302 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.302 ************************************ 00:04:45.302 START TEST devices 00:04:45.302 ************************************ 00:04:45.302 17:14:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:45.302 * Looking for test storage... 
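The guess_driver trace that just completed (driver.sh@21-49) reduces to three checks: enumerate `/sys/kernel/iommu_groups/*` (322 groups here), confirm `modprobe --show-depends vfio_pci` resolves to `.ko` modules, and emit `vfio-pci`. A hedged sketch of that decision; the fallback driver name is an assumption for illustration, not something the log shows:

```shell
#!/usr/bin/env bash
# Sketch of the pick_driver decision traced above: vfio-pci wins when
# iommu groups are exposed and the vfio_pci module resolves to .ko files.
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    # Note: like the original glob, an unmatched pattern still counts as
    # one array entry, so the modprobe check does the real gating.
    if (( ${#groups[@]} > 0 )) &&
       [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
        echo vfio-pci
    else
        echo uio_pci_generic   # hypothetical fallback, not from the log
    fi
}

pick_driver
```

On the CI node above this path prints `vfio-pci`, which is why the test then asserts `[[ vfio-pci == vfio-pci ]]` for every device in the config scan.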
00:04:45.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:45.302 17:14:52 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:45.302 17:14:52 -- setup/devices.sh@192 -- # setup reset 00:04:45.302 17:14:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.302 17:14:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.606 17:14:57 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:48.607 17:14:57 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:48.607 17:14:57 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:48.607 17:14:57 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:48.607 17:14:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:48.607 17:14:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:48.607 17:14:57 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:48.607 17:14:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:48.607 17:14:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:48.607 17:14:57 -- setup/devices.sh@196 -- # blocks=() 00:04:48.607 17:14:57 -- setup/devices.sh@196 -- # declare -a blocks 00:04:48.607 17:14:57 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:48.607 17:14:57 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:48.607 17:14:57 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:48.607 17:14:57 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:48.607 17:14:57 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:48.607 17:14:57 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:48.607 17:14:57 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:48.607 17:14:57 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:48.607 17:14:57 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:48.607 17:14:57 -- scripts/common.sh@380 
-- # local block=nvme0n1 pt 00:04:48.607 17:14:57 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:48.607 No valid GPT data, bailing 00:04:48.607 17:14:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:48.607 17:14:57 -- scripts/common.sh@393 -- # pt= 00:04:48.607 17:14:57 -- scripts/common.sh@394 -- # return 1 00:04:48.607 17:14:57 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:48.607 17:14:57 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:48.607 17:14:57 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:48.607 17:14:57 -- setup/common.sh@80 -- # echo 1920383410176 00:04:48.607 17:14:57 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:48.607 17:14:57 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:48.607 17:14:57 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:48.607 17:14:57 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:48.607 17:14:57 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:48.607 17:14:57 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:48.607 17:14:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.607 17:14:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.607 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:04:48.607 ************************************ 00:04:48.607 START TEST nvme_mount 00:04:48.607 ************************************ 00:04:48.607 17:14:57 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:48.607 17:14:57 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:48.607 17:14:57 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:48.607 17:14:57 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.607 17:14:57 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:48.607 17:14:57 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:48.607 17:14:57 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:48.868 17:14:57 -- setup/common.sh@40 -- # local part_no=1 00:04:48.868 17:14:57 -- setup/common.sh@41 -- # local size=1073741824 00:04:48.868 17:14:57 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:48.868 17:14:57 -- setup/common.sh@44 -- # parts=() 00:04:48.868 17:14:57 -- setup/common.sh@44 -- # local parts 00:04:48.868 17:14:57 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:48.868 17:14:57 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:48.868 17:14:57 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:48.868 17:14:57 -- setup/common.sh@46 -- # (( part++ )) 00:04:48.868 17:14:57 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:48.868 17:14:57 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:48.868 17:14:57 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:48.868 17:14:57 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:49.810 Creating new GPT entries in memory. 00:04:49.810 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:49.810 other utilities. 00:04:49.810 17:14:58 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:49.810 17:14:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.810 17:14:58 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:49.810 17:14:58 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:49.810 17:14:58 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:50.752 Creating new GPT entries in memory. 00:04:50.752 The operation has completed successfully. 
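The `sgdisk --new=1:2048:2099199` call logged above comes from setup/common.sh's sector arithmetic: the 1 GiB partition size is converted to 512-byte sectors (@51), the first partition starts at sector 2048, and the end sector is start + size − 1 (@58-59). The bounds can be checked by replaying the math:

```shell
#!/usr/bin/env bash
# Replaying the partition math traced above (setup/common.sh@51,58-59):
# a 1 GiB partition in 512-byte sectors, first partition at sector 2048.
size_bytes=1073741824
size=$(( size_bytes / 512 ))    # 2097152 sectors
part_start=0
part_end=0

# first partition starts at 2048; later ones would start at part_end + 1
part_start=$(( part_start == 0 ? 2048 : part_end + 1 ))
part_end=$(( part_start + size - 1 ))

echo "--new=1:${part_start}:${part_end}"
```

This prints `--new=1:2048:2099199`, matching the sgdisk argument in the log exactly.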
00:04:50.752 17:14:59 -- setup/common.sh@57 -- # (( part++ )) 00:04:50.752 17:14:59 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.752 17:14:59 -- setup/common.sh@62 -- # wait 2953365 00:04:51.013 17:14:59 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.013 17:14:59 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:51.013 17:14:59 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.013 17:14:59 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:51.013 17:14:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:51.013 17:14:59 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.013 17:14:59 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.013 17:14:59 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:51.013 17:14:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:51.013 17:14:59 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.013 17:14:59 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.013 17:14:59 -- setup/devices.sh@53 -- # local found=0 00:04:51.013 17:14:59 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.013 17:14:59 -- setup/devices.sh@56 -- # : 00:04:51.013 17:14:59 -- setup/devices.sh@59 -- # local pci status 00:04:51.013 17:14:59 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:51.013 17:14:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:51.013 17:14:59 -- setup/devices.sh@47 -- # setup output config 00:04:51.013 17:14:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.013 17:14:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:54.319 17:15:02 -- setup/devices.sh@63 -- # found=1 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.319 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.319 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.580 17:15:02 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.580 17:15:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.840 17:15:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.840 17:15:03 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:54.840 17:15:03 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.840 17:15:03 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.840 
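The `found=1` scan above is devices.sh@60-63: each line of setup.sh status output is read as `<bdf> _ _ <status...>`, and the target BDF counts as found when its status reports active devices (hence "so not binding PCI dev" in the trace). A self-contained sketch of that loop; the function name and sample lines are illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the devices.sh@60-63 verify loop traced above: scan status
# lines for the target BDF and flag it when its status shows an active
# mount, so the setup script leaves that controller bound to the kernel.
find_target() {
    local target=$1 found=0 pci _a _b status
    while read -r pci _a _b status; do
        [[ $pci == "$target" ]] || continue
        [[ $status == "Active devices:"* ]] && found=1
    done
    echo "$found"
}

printf '%s\n' \
    '0000:00:01.0 x y idle' \
    '0000:65:00.0 x y Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev' |
    find_target 0000:65:00.0
```

Fed the sample lines, this prints `1`, mirroring the `found=1` the test requires before it proceeds to the mount/cleanup steps.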
17:15:03 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.840 17:15:03 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:54.840 17:15:03 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.840 17:15:03 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.840 17:15:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.840 17:15:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:54.840 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.840 17:15:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.840 17:15:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.101 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:55.101 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:55.101 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.101 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:55.101 17:15:03 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:55.101 17:15:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:55.101 17:15:03 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.101 17:15:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:55.101 17:15:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:55.101 17:15:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.101 17:15:03 -- setup/devices.sh@116 -- # verify 0000:65:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.101 17:15:03 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:55.101 17:15:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:55.101 17:15:03 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.101 17:15:03 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.101 17:15:03 -- setup/devices.sh@53 -- # local found=0 00:04:55.101 17:15:03 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.101 17:15:03 -- setup/devices.sh@56 -- # : 00:04:55.101 17:15:03 -- setup/devices.sh@59 -- # local pci status 00:04:55.101 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.101 17:15:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:55.101 17:15:03 -- setup/devices.sh@47 -- # setup output config 00:04:55.101 17:15:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.101 17:15:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.310 17:15:06 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.310 17:15:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:59.310 17:15:06 -- setup/devices.sh@63 -- # found=1 00:04:59.310 17:15:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.311 17:15:07 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.311 17:15:07 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.311 17:15:07 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.311 17:15:07 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.311 17:15:07 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.311 17:15:07 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:59.311 17:15:07 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:59.311 17:15:07 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:59.311 17:15:07 -- setup/devices.sh@50 -- # local mount_point= 00:04:59.311 17:15:07 -- setup/devices.sh@51 -- # local test_file= 00:04:59.311 17:15:07 -- setup/devices.sh@53 -- # local found=0 00:04:59.311 17:15:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:59.311 17:15:07 -- setup/devices.sh@59 -- # local pci status 00:04:59.311 17:15:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 17:15:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:59.311 17:15:07 -- setup/devices.sh@47 -- # setup 
output config 00:04:59.311 17:15:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.311 17:15:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:02.620 17:15:11 -- setup/devices.sh@63 -- # found=1 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.620 17:15:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.620 17:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.193 17:15:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.193 17:15:11 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:03.193 17:15:11 -- setup/devices.sh@68 -- # return 0 00:05:03.193 17:15:11 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:03.193 17:15:11 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.193 17:15:11 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.193 17:15:11 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.193 17:15:11 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.193 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:05:03.193 00:05:03.193 real 0m14.367s 00:05:03.193 user 0m4.506s 00:05:03.193 sys 0m7.798s 00:05:03.193 17:15:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.193 17:15:11 -- common/autotest_common.sh@10 -- # set +x 00:05:03.193 ************************************ 00:05:03.193 END TEST nvme_mount 00:05:03.193 ************************************ 00:05:03.193 17:15:11 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:03.193 17:15:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.193 17:15:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.193 17:15:11 -- common/autotest_common.sh@10 -- # set +x 00:05:03.193 ************************************ 00:05:03.193 START TEST dm_mount 00:05:03.193 ************************************ 00:05:03.193 17:15:11 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:03.193 17:15:11 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:03.193 17:15:11 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:03.193 17:15:11 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:03.193 17:15:11 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:03.193 17:15:11 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:03.193 17:15:11 -- setup/common.sh@40 -- # local part_no=2 00:05:03.193 17:15:11 -- setup/common.sh@41 -- # local size=1073741824 00:05:03.193 17:15:11 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:03.194 17:15:11 -- setup/common.sh@44 -- # parts=() 00:05:03.194 17:15:11 -- setup/common.sh@44 -- # local parts 00:05:03.194 17:15:11 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:03.194 17:15:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.194 17:15:11 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.194 17:15:11 -- setup/common.sh@46 -- # (( part++ )) 00:05:03.194 17:15:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.194 17:15:11 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
00:05:03.194 17:15:11 -- setup/common.sh@46 -- # (( part++ )) 00:05:03.194 17:15:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.194 17:15:11 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:03.194 17:15:11 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:03.194 17:15:11 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:04.137 Creating new GPT entries in memory. 00:05:04.137 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:04.137 other utilities. 00:05:04.137 17:15:12 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:04.137 17:15:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.137 17:15:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:04.137 17:15:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.137 17:15:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:05.080 Creating new GPT entries in memory. 00:05:05.080 The operation has completed successfully. 00:05:05.080 17:15:13 -- setup/common.sh@57 -- # (( part++ )) 00:05:05.080 17:15:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.080 17:15:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:05.080 17:15:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:05.080 17:15:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:06.466 The operation has completed successfully. 
00:05:06.466 17:15:14 -- setup/common.sh@57 -- # (( part++ )) 00:05:06.466 17:15:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.466 17:15:14 -- setup/common.sh@62 -- # wait 2959442 00:05:06.466 17:15:14 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:06.466 17:15:14 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.466 17:15:14 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.466 17:15:14 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:06.466 17:15:14 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:06.466 17:15:14 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.466 17:15:14 -- setup/devices.sh@161 -- # break 00:05:06.466 17:15:14 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.466 17:15:14 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:06.466 17:15:14 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:05:06.466 17:15:14 -- setup/devices.sh@166 -- # dm=dm-1 00:05:06.466 17:15:14 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:05:06.466 17:15:14 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:05:06.466 17:15:14 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.466 17:15:14 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:06.466 17:15:14 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.466 17:15:14 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.466 17:15:14 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:06.466 17:15:14 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.466 17:15:14 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.466 17:15:14 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:06.466 17:15:14 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:06.466 17:15:14 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.466 17:15:14 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.466 17:15:14 -- setup/devices.sh@53 -- # local found=0 00:05:06.466 17:15:14 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.466 17:15:14 -- setup/devices.sh@56 -- # : 00:05:06.466 17:15:14 -- setup/devices.sh@59 -- # local pci status 00:05:06.466 17:15:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.466 17:15:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:06.466 17:15:14 -- setup/devices.sh@47 -- # setup output config 00:05:06.466 17:15:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.466 17:15:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:09.774 17:15:18 -- setup/devices.sh@63 -- # found=1 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 
17:15:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.774 17:15:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.774 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.347 17:15:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.347 17:15:18 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:10.347 17:15:18 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.347 17:15:18 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:10.347 17:15:18 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.347 17:15:18 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.347 17:15:18 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:05:10.347 17:15:18 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:10.347 17:15:18 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:05:10.347 17:15:18 -- setup/devices.sh@50 -- # local mount_point= 00:05:10.347 17:15:18 -- setup/devices.sh@51 -- # local test_file= 00:05:10.347 17:15:18 -- setup/devices.sh@53 -- # local found=0 00:05:10.347 17:15:18 -- setup/devices.sh@55 -- # [[ -n '' ]] 
00:05:10.347 17:15:18 -- setup/devices.sh@59 -- # local pci status 00:05:10.347 17:15:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.347 17:15:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:10.347 17:15:18 -- setup/devices.sh@47 -- # setup output config 00:05:10.347 17:15:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.347 17:15:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:13.649 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:05:13.650 17:15:22 -- setup/devices.sh@63 -- # found=1 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.650 17:15:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.650 17:15:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.221 17:15:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.221 17:15:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:14.221 17:15:22 -- setup/devices.sh@68 -- # return 0 00:05:14.221 17:15:22 -- setup/devices.sh@187 -- # cleanup_dm 00:05:14.222 17:15:22 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.222 17:15:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:14.222 17:15:22 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:14.222 17:15:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.222 17:15:22 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:14.222 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:14.222 17:15:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:14.222 17:15:22 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:14.222 00:05:14.222 real 0m11.086s 00:05:14.222 user 0m2.938s 00:05:14.222 sys 0m5.227s 00:05:14.222 17:15:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.222 17:15:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.222 ************************************ 00:05:14.222 END TEST dm_mount 00:05:14.222 ************************************ 00:05:14.222 17:15:22 -- setup/devices.sh@1 -- # cleanup 00:05:14.222 17:15:22 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:14.222 17:15:22 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.222 17:15:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.222 17:15:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:14.222 17:15:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.222 17:15:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:14.489 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:14.489 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:14.489 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:14.489 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:14.489 17:15:22 -- setup/devices.sh@12 -- # cleanup_dm 00:05:14.489 
17:15:22 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.489 17:15:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:14.489 17:15:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.489 17:15:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:14.489 17:15:22 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.489 17:15:22 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:14.489 00:05:14.489 real 0m30.328s 00:05:14.489 user 0m9.121s 00:05:14.489 sys 0m16.112s 00:05:14.489 17:15:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.489 17:15:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.489 ************************************ 00:05:14.489 END TEST devices 00:05:14.489 ************************************ 00:05:14.489 00:05:14.489 real 1m43.184s 00:05:14.489 user 0m34.771s 00:05:14.489 sys 0m59.988s 00:05:14.489 17:15:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.489 17:15:23 -- common/autotest_common.sh@10 -- # set +x 00:05:14.489 ************************************ 00:05:14.489 END TEST setup.sh 00:05:14.489 ************************************ 00:05:14.869 17:15:23 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:18.187 Hugepages 00:05:18.187 node hugesize free / total 00:05:18.187 node0 1048576kB 0 / 0 00:05:18.187 node0 2048kB 2048 / 2048 00:05:18.187 node1 1048576kB 0 / 0 00:05:18.187 node1 2048kB 0 / 0 00:05:18.187 00:05:18.187 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.187 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:18.187 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:18.187 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:18.187 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:18.187 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:18.187 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:18.187 
I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:18.187 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:18.187 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:18.187 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:18.187 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:18.187 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:18.187 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:18.187 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:18.187 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:18.187 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:18.187 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:18.187 17:15:26 -- spdk/autotest.sh@141 -- # uname -s 00:05:18.187 17:15:26 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:18.187 17:15:26 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:18.187 17:15:26 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:22.396 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:22.396 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:23.783 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:24.044 17:15:32 -- common/autotest_common.sh@1517 
-- # sleep 1 00:05:25.431 17:15:33 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:25.431 17:15:33 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:25.431 17:15:33 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.431 17:15:33 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:25.431 17:15:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:25.431 17:15:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:25.431 17:15:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.431 17:15:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:25.431 17:15:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:25.431 17:15:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:25.431 17:15:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:25.431 17:15:33 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:28.735 Waiting for block devices as requested 00:05:28.735 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:28.997 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:28.997 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:28.997 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:28.997 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:29.258 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:29.258 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:29.258 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:29.519 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:29.519 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:29.780 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:29.780 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:29.780 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:30.041 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:30.041 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:30.041 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:30.301 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:30.562 17:15:38 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:30.562 17:15:38 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:30.562 17:15:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:30.562 17:15:38 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:30.562 17:15:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:30.562 17:15:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:30.562 17:15:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:30.562 17:15:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:30.562 17:15:38 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:30.562 17:15:38 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:30.562 17:15:38 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:30.562 17:15:38 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:30.563 17:15:38 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:30.563 17:15:38 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:05:30.563 17:15:38 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:30.563 17:15:38 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:30.563 17:15:38 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:30.563 17:15:38 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:30.563 17:15:38 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:30.563 17:15:38 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:30.563 17:15:38 -- common/autotest_common.sh@1540 -- # 
[[ 0 -eq 0 ]] 00:05:30.563 17:15:38 -- common/autotest_common.sh@1542 -- # continue 00:05:30.563 17:15:38 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:30.563 17:15:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:30.563 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:05:30.563 17:15:38 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:30.563 17:15:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:30.563 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:05:30.563 17:15:38 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:34.769 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:34.769 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:34.769 17:15:43 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:34.769 17:15:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:34.769 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:05:34.769 17:15:43 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:34.769 17:15:43 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 
00:05:34.769 17:15:43 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:34.769 17:15:43 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:34.769 17:15:43 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:34.769 17:15:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:34.769 17:15:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:34.769 17:15:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:34.769 17:15:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.769 17:15:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:34.769 17:15:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:35.030 17:15:43 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:35.030 17:15:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:35.030 17:15:43 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:35.030 17:15:43 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:35.030 17:15:43 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:05:35.030 17:15:43 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:35.030 17:15:43 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:35.030 17:15:43 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:35.030 17:15:43 -- common/autotest_common.sh@1578 -- # return 0 00:05:35.030 17:15:43 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:35.030 17:15:43 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:35.030 17:15:43 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:35.030 17:15:43 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:35.030 17:15:43 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:35.030 17:15:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:35.030 17:15:43 -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.030 17:15:43 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.030 17:15:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.030 17:15:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.030 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.030 ************************************ 00:05:35.030 START TEST env 00:05:35.030 ************************************ 00:05:35.030 17:15:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.030 * Looking for test storage... 00:05:35.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:35.030 17:15:43 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:35.030 17:15:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.030 17:15:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.030 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.030 ************************************ 00:05:35.030 START TEST env_memory 00:05:35.030 ************************************ 00:05:35.030 17:15:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:35.030 00:05:35.030 00:05:35.030 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.030 http://cunit.sourceforge.net/ 00:05:35.030 00:05:35.030 00:05:35.030 Suite: memory 00:05:35.030 Test: alloc and free memory map ...[2024-10-13 17:15:43.504394] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:35.030 passed 00:05:35.030 Test: mem map translation ...[2024-10-13 17:15:43.530097] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:35.030 [2024-10-13 17:15:43.530125] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:35.030 [2024-10-13 17:15:43.530173] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:35.030 [2024-10-13 17:15:43.530183] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:35.293 passed 00:05:35.293 Test: mem map registration ...[2024-10-13 17:15:43.585390] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:35.293 [2024-10-13 17:15:43.585411] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:35.293 passed 00:05:35.293 Test: mem map adjacent registrations ...passed 00:05:35.293 00:05:35.293 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.293 suites 1 1 n/a 0 0 00:05:35.293 tests 4 4 4 0 0 00:05:35.293 asserts 152 152 152 0 n/a 00:05:35.293 00:05:35.293 Elapsed time = 0.193 seconds 00:05:35.293 00:05:35.293 real 0m0.208s 00:05:35.293 user 0m0.200s 00:05:35.293 sys 0m0.007s 00:05:35.293 17:15:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.293 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.293 ************************************ 00:05:35.293 END TEST env_memory 00:05:35.293 ************************************ 00:05:35.293 17:15:43 -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.293 17:15:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.293 17:15:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.293 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.293 ************************************ 00:05:35.293 START TEST env_vtophys 00:05:35.293 ************************************ 00:05:35.293 17:15:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.293 EAL: lib.eal log level changed from notice to debug 00:05:35.293 EAL: Detected lcore 0 as core 0 on socket 0 00:05:35.293 EAL: Detected lcore 1 as core 1 on socket 0 00:05:35.293 EAL: Detected lcore 2 as core 2 on socket 0 00:05:35.293 EAL: Detected lcore 3 as core 3 on socket 0 00:05:35.293 EAL: Detected lcore 4 as core 4 on socket 0 00:05:35.293 EAL: Detected lcore 5 as core 5 on socket 0 00:05:35.293 EAL: Detected lcore 6 as core 6 on socket 0 00:05:35.293 EAL: Detected lcore 7 as core 7 on socket 0 00:05:35.293 EAL: Detected lcore 8 as core 8 on socket 0 00:05:35.293 EAL: Detected lcore 9 as core 9 on socket 0 00:05:35.293 EAL: Detected lcore 10 as core 10 on socket 0 00:05:35.293 EAL: Detected lcore 11 as core 11 on socket 0 00:05:35.293 EAL: Detected lcore 12 as core 12 on socket 0 00:05:35.293 EAL: Detected lcore 13 as core 13 on socket 0 00:05:35.293 EAL: Detected lcore 14 as core 14 on socket 0 00:05:35.293 EAL: Detected lcore 15 as core 15 on socket 0 00:05:35.293 EAL: Detected lcore 16 as core 16 on socket 0 00:05:35.293 EAL: Detected lcore 17 as core 17 on socket 0 00:05:35.293 EAL: Detected lcore 18 as core 18 on socket 0 00:05:35.293 EAL: Detected lcore 19 as core 19 on socket 0 00:05:35.293 EAL: Detected lcore 20 as core 20 on socket 0 00:05:35.293 EAL: Detected lcore 21 as core 21 on socket 0 00:05:35.293 EAL: Detected lcore 22 as core 22 on socket 0 00:05:35.293 EAL: Detected 
lcore 23 as core 23 on socket 0 00:05:35.293 EAL: Detected lcore 24 as core 24 on socket 0 00:05:35.293 EAL: Detected lcore 25 as core 25 on socket 0 00:05:35.293 EAL: Detected lcore 26 as core 26 on socket 0 00:05:35.293 EAL: Detected lcore 27 as core 27 on socket 0 00:05:35.293 EAL: Detected lcore 28 as core 28 on socket 0 00:05:35.293 EAL: Detected lcore 29 as core 29 on socket 0 00:05:35.293 EAL: Detected lcore 30 as core 30 on socket 0 00:05:35.293 EAL: Detected lcore 31 as core 31 on socket 0 00:05:35.293 EAL: Detected lcore 32 as core 32 on socket 0 00:05:35.293 EAL: Detected lcore 33 as core 33 on socket 0 00:05:35.293 EAL: Detected lcore 34 as core 34 on socket 0 00:05:35.293 EAL: Detected lcore 35 as core 35 on socket 0 00:05:35.293 EAL: Detected lcore 36 as core 0 on socket 1 00:05:35.293 EAL: Detected lcore 37 as core 1 on socket 1 00:05:35.293 EAL: Detected lcore 38 as core 2 on socket 1 00:05:35.293 EAL: Detected lcore 39 as core 3 on socket 1 00:05:35.293 EAL: Detected lcore 40 as core 4 on socket 1 00:05:35.293 EAL: Detected lcore 41 as core 5 on socket 1 00:05:35.293 EAL: Detected lcore 42 as core 6 on socket 1 00:05:35.293 EAL: Detected lcore 43 as core 7 on socket 1 00:05:35.293 EAL: Detected lcore 44 as core 8 on socket 1 00:05:35.293 EAL: Detected lcore 45 as core 9 on socket 1 00:05:35.293 EAL: Detected lcore 46 as core 10 on socket 1 00:05:35.293 EAL: Detected lcore 47 as core 11 on socket 1 00:05:35.293 EAL: Detected lcore 48 as core 12 on socket 1 00:05:35.293 EAL: Detected lcore 49 as core 13 on socket 1 00:05:35.293 EAL: Detected lcore 50 as core 14 on socket 1 00:05:35.293 EAL: Detected lcore 51 as core 15 on socket 1 00:05:35.293 EAL: Detected lcore 52 as core 16 on socket 1 00:05:35.293 EAL: Detected lcore 53 as core 17 on socket 1 00:05:35.293 EAL: Detected lcore 54 as core 18 on socket 1 00:05:35.293 EAL: Detected lcore 55 as core 19 on socket 1 00:05:35.293 EAL: Detected lcore 56 as core 20 on socket 1 00:05:35.293 EAL: Detected 
lcore 57 as core 21 on socket 1 00:05:35.293 EAL: Detected lcore 58 as core 22 on socket 1 00:05:35.293 EAL: Detected lcore 59 as core 23 on socket 1 00:05:35.293 EAL: Detected lcore 60 as core 24 on socket 1 00:05:35.293 EAL: Detected lcore 61 as core 25 on socket 1 00:05:35.293 EAL: Detected lcore 62 as core 26 on socket 1 00:05:35.294 EAL: Detected lcore 63 as core 27 on socket 1 00:05:35.294 EAL: Detected lcore 64 as core 28 on socket 1 00:05:35.294 EAL: Detected lcore 65 as core 29 on socket 1 00:05:35.294 EAL: Detected lcore 66 as core 30 on socket 1 00:05:35.294 EAL: Detected lcore 67 as core 31 on socket 1 00:05:35.294 EAL: Detected lcore 68 as core 32 on socket 1 00:05:35.294 EAL: Detected lcore 69 as core 33 on socket 1 00:05:35.294 EAL: Detected lcore 70 as core 34 on socket 1 00:05:35.294 EAL: Detected lcore 71 as core 35 on socket 1 00:05:35.294 EAL: Detected lcore 72 as core 0 on socket 0 00:05:35.294 EAL: Detected lcore 73 as core 1 on socket 0 00:05:35.294 EAL: Detected lcore 74 as core 2 on socket 0 00:05:35.294 EAL: Detected lcore 75 as core 3 on socket 0 00:05:35.294 EAL: Detected lcore 76 as core 4 on socket 0 00:05:35.294 EAL: Detected lcore 77 as core 5 on socket 0 00:05:35.294 EAL: Detected lcore 78 as core 6 on socket 0 00:05:35.294 EAL: Detected lcore 79 as core 7 on socket 0 00:05:35.294 EAL: Detected lcore 80 as core 8 on socket 0 00:05:35.294 EAL: Detected lcore 81 as core 9 on socket 0 00:05:35.294 EAL: Detected lcore 82 as core 10 on socket 0 00:05:35.294 EAL: Detected lcore 83 as core 11 on socket 0 00:05:35.294 EAL: Detected lcore 84 as core 12 on socket 0 00:05:35.294 EAL: Detected lcore 85 as core 13 on socket 0 00:05:35.294 EAL: Detected lcore 86 as core 14 on socket 0 00:05:35.294 EAL: Detected lcore 87 as core 15 on socket 0 00:05:35.294 EAL: Detected lcore 88 as core 16 on socket 0 00:05:35.294 EAL: Detected lcore 89 as core 17 on socket 0 00:05:35.294 EAL: Detected lcore 90 as core 18 on socket 0 00:05:35.294 EAL: Detected 
lcore 91 as core 19 on socket 0 00:05:35.294 EAL: Detected lcore 92 as core 20 on socket 0 00:05:35.294 EAL: Detected lcore 93 as core 21 on socket 0 00:05:35.294 EAL: Detected lcore 94 as core 22 on socket 0 00:05:35.294 EAL: Detected lcore 95 as core 23 on socket 0 00:05:35.294 EAL: Detected lcore 96 as core 24 on socket 0 00:05:35.294 EAL: Detected lcore 97 as core 25 on socket 0 00:05:35.294 EAL: Detected lcore 98 as core 26 on socket 0 00:05:35.294 EAL: Detected lcore 99 as core 27 on socket 0 00:05:35.294 EAL: Detected lcore 100 as core 28 on socket 0 00:05:35.294 EAL: Detected lcore 101 as core 29 on socket 0 00:05:35.294 EAL: Detected lcore 102 as core 30 on socket 0 00:05:35.294 EAL: Detected lcore 103 as core 31 on socket 0 00:05:35.294 EAL: Detected lcore 104 as core 32 on socket 0 00:05:35.294 EAL: Detected lcore 105 as core 33 on socket 0 00:05:35.294 EAL: Detected lcore 106 as core 34 on socket 0 00:05:35.294 EAL: Detected lcore 107 as core 35 on socket 0 00:05:35.294 EAL: Detected lcore 108 as core 0 on socket 1 00:05:35.294 EAL: Detected lcore 109 as core 1 on socket 1 00:05:35.294 EAL: Detected lcore 110 as core 2 on socket 1 00:05:35.294 EAL: Detected lcore 111 as core 3 on socket 1 00:05:35.294 EAL: Detected lcore 112 as core 4 on socket 1 00:05:35.294 EAL: Detected lcore 113 as core 5 on socket 1 00:05:35.294 EAL: Detected lcore 114 as core 6 on socket 1 00:05:35.294 EAL: Detected lcore 115 as core 7 on socket 1 00:05:35.294 EAL: Detected lcore 116 as core 8 on socket 1 00:05:35.294 EAL: Detected lcore 117 as core 9 on socket 1 00:05:35.294 EAL: Detected lcore 118 as core 10 on socket 1 00:05:35.294 EAL: Detected lcore 119 as core 11 on socket 1 00:05:35.294 EAL: Detected lcore 120 as core 12 on socket 1 00:05:35.294 EAL: Detected lcore 121 as core 13 on socket 1 00:05:35.294 EAL: Detected lcore 122 as core 14 on socket 1 00:05:35.294 EAL: Detected lcore 123 as core 15 on socket 1 00:05:35.294 EAL: Detected lcore 124 as core 16 on socket 1 
00:05:35.294 EAL: Detected lcore 125 as core 17 on socket 1 00:05:35.294 EAL: Detected lcore 126 as core 18 on socket 1 00:05:35.294 EAL: Detected lcore 127 as core 19 on socket 1 00:05:35.294 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:35.294 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:35.294 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:35.294 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:35.294 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:35.294 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:35.294 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:35.294 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:35.294 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:35.294 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:35.294 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:35.294 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:35.294 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:35.294 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:35.294 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:35.294 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:35.294 EAL: Maximum logical cores by configuration: 128 00:05:35.294 EAL: Detected CPU lcores: 128 00:05:35.294 EAL: Detected NUMA nodes: 2 00:05:35.294 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:35.294 EAL: Detected shared linkage of DPDK 00:05:35.294 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:35.294 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:35.294 EAL: Registered [vdev] bus. 
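The long run of "Detected lcore N as core C on socket S" lines above is EAL reading the kernel's CPU topology. A minimal sketch of the same mapping from Linux sysfs (output is machine-dependent; the standard kernel topology paths are assumed, and missing files fall back to "?"):

```shell
# Sketch of EAL's lcore/core/socket detection using Linux sysfs topology.
# Real EAL also handles offline CPUs and the lcore limit (128 here).
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  [ -d "$cpu" ] || continue
  lcore=${cpu##*cpu}
  core=$(cat "$cpu/topology/core_id" 2>/dev/null || echo "?")
  socket=$(cat "$cpu/topology/physical_package_id" 2>/dev/null || echo "?")
  echo "Detected lcore $lcore as core $core on socket $socket"
done
```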
00:05:35.294 EAL: bus.vdev log level changed from disabled to notice 00:05:35.294 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:35.294 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:35.294 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:35.294 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:35.294 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:35.294 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:35.294 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:35.294 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:35.294 EAL: No shared files mode enabled, IPC will be disabled 00:05:35.294 EAL: No shared files mode enabled, IPC is disabled 00:05:35.294 EAL: Bus pci wants IOVA as 'DC' 00:05:35.294 EAL: Bus vdev wants IOVA as 'DC' 00:05:35.294 EAL: Buses did not request a specific IOVA mode. 00:05:35.294 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:35.294 EAL: Selected IOVA mode 'VA' 00:05:35.294 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.294 EAL: Probing VFIO support... 00:05:35.294 EAL: IOMMU type 1 (Type 1) is supported 00:05:35.294 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:35.294 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:35.294 EAL: VFIO support initialized 00:05:35.294 EAL: Ask a virtual area of 0x2e000 bytes 00:05:35.294 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:35.294 EAL: Setting up physically contiguous memory... 
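The IOVA-mode lines above show the decision EAL logs: both buses answered "don't care" ('DC'), so EAL picked VA because an IOMMU was found. A rough sketch of that check (the sysfs test below is my simplified stand-in, not EAL's actual probe, which also inspects VFIO support as logged):

```shell
# Simplified stand-in for EAL's IOVA-mode choice when buses request 'DC':
# prefer IOVA-as-VA when an IOMMU is visible, else fall back to PA.
if [ -d /sys/class/iommu ] && [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
  iova_mode=VA
else
  iova_mode=PA
fi
echo "Selected IOVA mode '$iova_mode'"
```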
00:05:35.294 EAL: Setting maximum number of open files to 524288 00:05:35.294 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:35.294 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:35.294 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:35.294 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.294 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:35.294 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.294 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.294 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:35.294 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:35.294 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.294 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:35.294 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.294 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.294 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:35.294 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:35.294 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.294 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:35.294 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.294 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.294 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:35.294 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:35.294 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.294 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:35.294 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.294 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.294 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:35.294 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:35.294 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:35.294 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.294 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:35.294 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.294 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.294 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:35.294 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:35.294 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.294 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:35.294 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.294 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.294 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:35.294 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:35.294 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.294 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:35.294 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.294 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.294 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:35.294 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:35.294 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.294 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:35.294 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.294 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.294 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:35.294 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:35.294 EAL: Hugepages will be freed exactly as allocated. 
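The repeated "Ask a virtual area of 0x400000000 bytes" requests above follow directly from the memseg-list parameters EAL logs: each list reserves n_segs * hugepage_sz of virtual address space. The arithmetic, checked against the trace:

```shell
# Each memseg list above reserves n_segs * hugepage_sz of VA space:
# 8192 segments * 2 MiB hugepages = 16 GiB = 0x400000000 bytes.
n_segs=8192
hugepage_sz=2097152   # 2 MiB, as in "hugepage_sz:2097152" above
va_size=$((n_segs * hugepage_sz))
printf '0x%x\n' "$va_size"   # 0x400000000
```

With 4 lists per socket and 2 sockets, that is 8 such reservations, matching the eight 0x400000000-byte areas in the log.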
00:05:35.294 EAL: No shared files mode enabled, IPC is disabled 00:05:35.294 EAL: No shared files mode enabled, IPC is disabled 00:05:35.294 EAL: TSC frequency is ~2400000 KHz 00:05:35.294 EAL: Main lcore 0 is ready (tid=7fcffb47ba00;cpuset=[0]) 00:05:35.294 EAL: Trying to obtain current memory policy. 00:05:35.294 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.295 EAL: Restoring previous memory policy: 0 00:05:35.295 EAL: request: mp_malloc_sync 00:05:35.295 EAL: No shared files mode enabled, IPC is disabled 00:05:35.295 EAL: Heap on socket 0 was expanded by 2MB 00:05:35.295 EAL: No shared files mode enabled, IPC is disabled 00:05:35.295 EAL: No shared files mode enabled, IPC is disabled 00:05:35.295 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:35.295 EAL: Mem event callback 'spdk:(nil)' registered 00:05:35.295 00:05:35.295 00:05:35.295 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.295 http://cunit.sourceforge.net/ 00:05:35.295 00:05:35.295 00:05:35.295 Suite: components_suite 00:05:35.295 Test: vtophys_malloc_test ...passed 00:05:35.295 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:35.295 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.295 EAL: Restoring previous memory policy: 4 00:05:35.295 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.295 EAL: request: mp_malloc_sync 00:05:35.295 EAL: No shared files mode enabled, IPC is disabled 00:05:35.295 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.295 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.556 EAL: request: mp_malloc_sync 00:05:35.556 EAL: No shared files mode enabled, IPC is disabled 00:05:35.556 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.556 EAL: Trying to obtain current memory policy. 
00:05:35.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.556 EAL: Restoring previous memory policy: 4 00:05:35.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.556 EAL: request: mp_malloc_sync 00:05:35.556 EAL: No shared files mode enabled, IPC is disabled 00:05:35.556 EAL: Heap on socket 0 was expanded by 6MB 00:05:35.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.556 EAL: request: mp_malloc_sync 00:05:35.556 EAL: No shared files mode enabled, IPC is disabled 00:05:35.556 EAL: Heap on socket 0 was shrunk by 6MB 00:05:35.556 EAL: Trying to obtain current memory policy. 00:05:35.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.556 EAL: Restoring previous memory policy: 4 00:05:35.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.556 EAL: request: mp_malloc_sync 00:05:35.556 EAL: No shared files mode enabled, IPC is disabled 00:05:35.556 EAL: Heap on socket 0 was expanded by 10MB 00:05:35.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.556 EAL: request: mp_malloc_sync 00:05:35.556 EAL: No shared files mode enabled, IPC is disabled 00:05:35.556 EAL: Heap on socket 0 was shrunk by 10MB 00:05:35.556 EAL: Trying to obtain current memory policy. 00:05:35.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.556 EAL: Restoring previous memory policy: 4 00:05:35.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.556 EAL: request: mp_malloc_sync 00:05:35.556 EAL: No shared files mode enabled, IPC is disabled 00:05:35.556 EAL: Heap on socket 0 was expanded by 18MB 00:05:35.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.556 EAL: request: mp_malloc_sync 00:05:35.556 EAL: No shared files mode enabled, IPC is disabled 00:05:35.556 EAL: Heap on socket 0 was shrunk by 18MB 00:05:35.557 EAL: Trying to obtain current memory policy. 
00:05:35.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.557 EAL: Restoring previous memory policy: 4 00:05:35.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.557 EAL: request: mp_malloc_sync 00:05:35.557 EAL: No shared files mode enabled, IPC is disabled 00:05:35.557 EAL: Heap on socket 0 was expanded by 34MB 00:05:35.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.557 EAL: request: mp_malloc_sync 00:05:35.557 EAL: No shared files mode enabled, IPC is disabled 00:05:35.557 EAL: Heap on socket 0 was shrunk by 34MB 00:05:35.557 EAL: Trying to obtain current memory policy. 00:05:35.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.557 EAL: Restoring previous memory policy: 4 00:05:35.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.557 EAL: request: mp_malloc_sync 00:05:35.557 EAL: No shared files mode enabled, IPC is disabled 00:05:35.557 EAL: Heap on socket 0 was expanded by 66MB 00:05:35.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.557 EAL: request: mp_malloc_sync 00:05:35.557 EAL: No shared files mode enabled, IPC is disabled 00:05:35.557 EAL: Heap on socket 0 was shrunk by 66MB 00:05:35.557 EAL: Trying to obtain current memory policy. 00:05:35.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.557 EAL: Restoring previous memory policy: 4 00:05:35.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.557 EAL: request: mp_malloc_sync 00:05:35.557 EAL: No shared files mode enabled, IPC is disabled 00:05:35.557 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.557 EAL: request: mp_malloc_sync 00:05:35.557 EAL: No shared files mode enabled, IPC is disabled 00:05:35.557 EAL: Heap on socket 0 was shrunk by 130MB 00:05:35.557 EAL: Trying to obtain current memory policy. 
00:05:35.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.557 EAL: Restoring previous memory policy: 4 00:05:35.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.557 EAL: request: mp_malloc_sync 00:05:35.557 EAL: No shared files mode enabled, IPC is disabled 00:05:35.557 EAL: Heap on socket 0 was expanded by 258MB 00:05:35.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.557 EAL: request: mp_malloc_sync 00:05:35.557 EAL: No shared files mode enabled, IPC is disabled 00:05:35.557 EAL: Heap on socket 0 was shrunk by 258MB 00:05:35.557 EAL: Trying to obtain current memory policy. 00:05:35.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.557 EAL: Restoring previous memory policy: 4 00:05:35.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.557 EAL: request: mp_malloc_sync 00:05:35.557 EAL: No shared files mode enabled, IPC is disabled 00:05:35.557 EAL: Heap on socket 0 was expanded by 514MB 00:05:35.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.817 EAL: request: mp_malloc_sync 00:05:35.817 EAL: No shared files mode enabled, IPC is disabled 00:05:35.817 EAL: Heap on socket 0 was shrunk by 514MB 00:05:35.817 EAL: Trying to obtain current memory policy. 
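The expand/shrink sizes the vtophys test walks through above (4 MB, 6 MB, 10 MB, 18 MB, ... 514 MB, and 1026 MB in the final round) follow a 2^k + 2 pattern. This is my reading of the trace, not taken from the test source; the sequence reproduces as:

```shell
# Allocation sizes seen in the vtophys malloc test: (2^k + 2) MB, k = 1..10
sizes=""
for k in 1 2 3 4 5 6 7 8 9 10; do
  sizes="$sizes$(( (1 << k) + 2 ))MB "
done
echo "$sizes"   # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
```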
00:05:35.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.817 EAL: Restoring previous memory policy: 4 00:05:35.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.817 EAL: request: mp_malloc_sync 00:05:35.817 EAL: No shared files mode enabled, IPC is disabled 00:05:35.817 EAL: Heap on socket 0 was expanded by 1026MB 00:05:36.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.078 EAL: request: mp_malloc_sync 00:05:36.078 EAL: No shared files mode enabled, IPC is disabled 00:05:36.078 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:36.078 passed 00:05:36.078 00:05:36.078 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.078 suites 1 1 n/a 0 0 00:05:36.078 tests 2 2 2 0 0 00:05:36.078 asserts 497 497 497 0 n/a 00:05:36.078 00:05:36.078 Elapsed time = 0.680 seconds 00:05:36.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.078 EAL: request: mp_malloc_sync 00:05:36.078 EAL: No shared files mode enabled, IPC is disabled 00:05:36.078 EAL: Heap on socket 0 was shrunk by 2MB 00:05:36.078 EAL: No shared files mode enabled, IPC is disabled 00:05:36.078 EAL: No shared files mode enabled, IPC is disabled 00:05:36.078 EAL: No shared files mode enabled, IPC is disabled 00:05:36.078 00:05:36.078 real 0m0.815s 00:05:36.078 user 0m0.422s 00:05:36.078 sys 0m0.369s 00:05:36.078 17:15:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.078 17:15:44 -- common/autotest_common.sh@10 -- # set +x 00:05:36.078 ************************************ 00:05:36.078 END TEST env_vtophys 00:05:36.078 ************************************ 00:05:36.078 17:15:44 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.078 17:15:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.078 17:15:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.078 17:15:44 -- common/autotest_common.sh@10 -- # set +x 00:05:36.078 ************************************ 00:05:36.078 
START TEST env_pci 00:05:36.078 ************************************ 00:05:36.078 17:15:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.078 00:05:36.078 00:05:36.078 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.078 http://cunit.sourceforge.net/ 00:05:36.078 00:05:36.078 00:05:36.078 Suite: pci 00:05:36.078 Test: pci_hook ...[2024-10-13 17:15:44.583026] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2971275 has claimed it 00:05:36.340 EAL: Cannot find device (10000:00:01.0) 00:05:36.340 EAL: Failed to attach device on primary process 00:05:36.340 passed 00:05:36.340 00:05:36.340 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.340 suites 1 1 n/a 0 0 00:05:36.340 tests 1 1 1 0 0 00:05:36.340 asserts 25 25 25 0 n/a 00:05:36.340 00:05:36.340 Elapsed time = 0.031 seconds 00:05:36.340 00:05:36.340 real 0m0.051s 00:05:36.340 user 0m0.015s 00:05:36.340 sys 0m0.036s 00:05:36.340 17:15:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.340 17:15:44 -- common/autotest_common.sh@10 -- # set +x 00:05:36.340 ************************************ 00:05:36.340 END TEST env_pci 00:05:36.340 ************************************ 00:05:36.340 17:15:44 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:36.340 17:15:44 -- env/env.sh@15 -- # uname 00:05:36.340 17:15:44 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:36.340 17:15:44 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:36.340 17:15:44 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.340 17:15:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:36.340 17:15:44 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:05:36.340 17:15:44 -- common/autotest_common.sh@10 -- # set +x 00:05:36.340 ************************************ 00:05:36.340 START TEST env_dpdk_post_init 00:05:36.340 ************************************ 00:05:36.340 17:15:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.340 EAL: Detected CPU lcores: 128 00:05:36.340 EAL: Detected NUMA nodes: 2 00:05:36.340 EAL: Detected shared linkage of DPDK 00:05:36.340 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.340 EAL: Selected IOVA mode 'VA' 00:05:36.340 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.340 EAL: VFIO support initialized 00:05:36.340 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.340 EAL: Using IOMMU type 1 (Type 1) 00:05:36.601 EAL: Ignore mapping IO port bar(1) 00:05:36.601 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:36.863 EAL: Ignore mapping IO port bar(1) 00:05:36.863 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:36.863 EAL: Ignore mapping IO port bar(1) 00:05:37.124 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:37.124 EAL: Ignore mapping IO port bar(1) 00:05:37.385 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:37.385 EAL: Ignore mapping IO port bar(1) 00:05:37.647 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:37.647 EAL: Ignore mapping IO port bar(1) 00:05:37.647 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:37.908 EAL: Ignore mapping IO port bar(1) 00:05:37.908 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:38.169 EAL: Ignore mapping IO port bar(1) 00:05:38.169 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 
00:05:38.430 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:38.430 EAL: Ignore mapping IO port bar(1) 00:05:38.691 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:38.691 EAL: Ignore mapping IO port bar(1) 00:05:38.952 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:38.952 EAL: Ignore mapping IO port bar(1) 00:05:39.214 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:39.214 EAL: Ignore mapping IO port bar(1) 00:05:39.214 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:39.476 EAL: Ignore mapping IO port bar(1) 00:05:39.476 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:39.737 EAL: Ignore mapping IO port bar(1) 00:05:39.737 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:39.998 EAL: Ignore mapping IO port bar(1) 00:05:39.998 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:39.998 EAL: Ignore mapping IO port bar(1) 00:05:40.259 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:40.259 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:40.259 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:40.259 Starting DPDK initialization... 00:05:40.259 Starting SPDK post initialization... 00:05:40.259 SPDK NVMe probe 00:05:40.259 Attaching to 0000:65:00.0 00:05:40.259 Attached to 0000:65:00.0 00:05:40.259 Cleaning up... 
00:05:42.174 00:05:42.174 real 0m5.738s 00:05:42.174 user 0m0.170s 00:05:42.174 sys 0m0.115s 00:05:42.174 17:15:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.174 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:42.174 ************************************ 00:05:42.174 END TEST env_dpdk_post_init 00:05:42.174 ************************************ 00:05:42.174 17:15:50 -- env/env.sh@26 -- # uname 00:05:42.174 17:15:50 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:42.174 17:15:50 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.174 17:15:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.174 17:15:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.174 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:42.174 ************************************ 00:05:42.174 START TEST env_mem_callbacks 00:05:42.174 ************************************ 00:05:42.174 17:15:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.174 EAL: Detected CPU lcores: 128 00:05:42.174 EAL: Detected NUMA nodes: 2 00:05:42.174 EAL: Detected shared linkage of DPDK 00:05:42.174 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.174 EAL: Selected IOVA mode 'VA' 00:05:42.174 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.174 EAL: VFIO support initialized 00:05:42.174 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.174 00:05:42.174 00:05:42.174 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.174 http://cunit.sourceforge.net/ 00:05:42.174 00:05:42.174 00:05:42.174 Suite: memory 00:05:42.174 Test: test ... 
00:05:42.174 register 0x200000200000 2097152 00:05:42.174 malloc 3145728 00:05:42.174 register 0x200000400000 4194304 00:05:42.174 buf 0x200000500000 len 3145728 PASSED 00:05:42.174 malloc 64 00:05:42.174 buf 0x2000004fff40 len 64 PASSED 00:05:42.174 malloc 4194304 00:05:42.174 register 0x200000800000 6291456 00:05:42.174 buf 0x200000a00000 len 4194304 PASSED 00:05:42.174 free 0x200000500000 3145728 00:05:42.174 free 0x2000004fff40 64 00:05:42.174 unregister 0x200000400000 4194304 PASSED 00:05:42.174 free 0x200000a00000 4194304 00:05:42.174 unregister 0x200000800000 6291456 PASSED 00:05:42.174 malloc 8388608 00:05:42.174 register 0x200000400000 10485760 00:05:42.174 buf 0x200000600000 len 8388608 PASSED 00:05:42.174 free 0x200000600000 8388608 00:05:42.174 unregister 0x200000400000 10485760 PASSED 00:05:42.174 passed 00:05:42.174 00:05:42.174 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.174 suites 1 1 n/a 0 0 00:05:42.174 tests 1 1 1 0 0 00:05:42.174 asserts 15 15 15 0 n/a 00:05:42.174 00:05:42.174 Elapsed time = 0.010 seconds 00:05:42.174 00:05:42.174 real 0m0.068s 00:05:42.174 user 0m0.016s 00:05:42.174 sys 0m0.051s 00:05:42.174 17:15:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.174 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:42.174 ************************************ 00:05:42.174 END TEST env_mem_callbacks 00:05:42.174 ************************************ 00:05:42.174 00:05:42.174 real 0m7.224s 00:05:42.174 user 0m0.943s 00:05:42.174 sys 0m0.845s 00:05:42.174 17:15:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.174 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:42.174 ************************************ 00:05:42.174 END TEST env 00:05:42.174 ************************************ 00:05:42.174 17:15:50 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:42.174 17:15:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:05:42.174 17:15:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.174 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:42.174 ************************************ 00:05:42.174 START TEST rpc 00:05:42.174 ************************************ 00:05:42.174 17:15:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:42.437 * Looking for test storage... 00:05:42.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:42.437 17:15:50 -- rpc/rpc.sh@65 -- # spdk_pid=2972697 00:05:42.437 17:15:50 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.437 17:15:50 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:42.437 17:15:50 -- rpc/rpc.sh@67 -- # waitforlisten 2972697 00:05:42.437 17:15:50 -- common/autotest_common.sh@819 -- # '[' -z 2972697 ']' 00:05:42.437 17:15:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.437 17:15:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.437 17:15:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.437 17:15:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.437 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:42.437 [2024-10-13 17:15:50.773747] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:42.437 [2024-10-13 17:15:50.773821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2972697 ] 00:05:42.437 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.437 [2024-10-13 17:15:50.859730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.437 [2024-10-13 17:15:50.903731] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.437 [2024-10-13 17:15:50.903882] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:42.437 [2024-10-13 17:15:50.903895] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2972697' to capture a snapshot of events at runtime. 00:05:42.437 [2024-10-13 17:15:50.903902] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2972697 for offline analysis/debug. 
00:05:42.437 [2024-10-13 17:15:50.903940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.389 17:15:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.389 17:15:51 -- common/autotest_common.sh@852 -- # return 0 00:05:43.389 17:15:51 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.389 17:15:51 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.389 17:15:51 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:43.389 17:15:51 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:43.389 17:15:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.389 17:15:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.389 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.389 ************************************ 00:05:43.389 START TEST rpc_integrity 00:05:43.389 ************************************ 00:05:43.389 17:15:51 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:43.389 17:15:51 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.389 17:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.389 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.389 17:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.389 17:15:51 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.389 17:15:51 -- rpc/rpc.sh@13 -- # jq length 00:05:43.389 17:15:51 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:05:43.389 17:15:51 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.389 17:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.389 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.389 17:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.389 17:15:51 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:43.389 17:15:51 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.389 17:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.389 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.389 17:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.389 17:15:51 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:43.389 { 00:05:43.389 "name": "Malloc0", 00:05:43.389 "aliases": [ 00:05:43.389 "db8840ee-455a-487c-9a99-dd991969efbc" 00:05:43.389 ], 00:05:43.389 "product_name": "Malloc disk", 00:05:43.389 "block_size": 512, 00:05:43.389 "num_blocks": 16384, 00:05:43.389 "uuid": "db8840ee-455a-487c-9a99-dd991969efbc", 00:05:43.389 "assigned_rate_limits": { 00:05:43.389 "rw_ios_per_sec": 0, 00:05:43.389 "rw_mbytes_per_sec": 0, 00:05:43.389 "r_mbytes_per_sec": 0, 00:05:43.389 "w_mbytes_per_sec": 0 00:05:43.389 }, 00:05:43.389 "claimed": false, 00:05:43.389 "zoned": false, 00:05:43.389 "supported_io_types": { 00:05:43.389 "read": true, 00:05:43.389 "write": true, 00:05:43.389 "unmap": true, 00:05:43.389 "write_zeroes": true, 00:05:43.389 "flush": true, 00:05:43.389 "reset": true, 00:05:43.389 "compare": false, 00:05:43.389 "compare_and_write": false, 00:05:43.389 "abort": true, 00:05:43.389 "nvme_admin": false, 00:05:43.389 "nvme_io": false 00:05:43.389 }, 00:05:43.389 "memory_domains": [ 00:05:43.389 { 00:05:43.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.389 "dma_device_type": 2 00:05:43.389 } 00:05:43.389 ], 00:05:43.389 "driver_specific": {} 00:05:43.389 } 00:05:43.389 ]' 00:05:43.389 17:15:51 -- rpc/rpc.sh@17 -- # jq length 00:05:43.389 17:15:51 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 
00:05:43.389 17:15:51 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:43.389 17:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.389 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.389 [2024-10-13 17:15:51.728398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:43.389 [2024-10-13 17:15:51.728452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.389 [2024-10-13 17:15:51.728467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25195c0 00:05:43.389 [2024-10-13 17:15:51.728476] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.389 [2024-10-13 17:15:51.730015] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.389 [2024-10-13 17:15:51.730052] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.389 Passthru0 00:05:43.389 17:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.389 17:15:51 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:43.389 17:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.389 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.389 17:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.389 17:15:51 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:43.389 { 00:05:43.389 "name": "Malloc0", 00:05:43.389 "aliases": [ 00:05:43.389 "db8840ee-455a-487c-9a99-dd991969efbc" 00:05:43.389 ], 00:05:43.389 "product_name": "Malloc disk", 00:05:43.389 "block_size": 512, 00:05:43.389 "num_blocks": 16384, 00:05:43.389 "uuid": "db8840ee-455a-487c-9a99-dd991969efbc", 00:05:43.389 "assigned_rate_limits": { 00:05:43.389 "rw_ios_per_sec": 0, 00:05:43.389 "rw_mbytes_per_sec": 0, 00:05:43.389 "r_mbytes_per_sec": 0, 00:05:43.389 "w_mbytes_per_sec": 0 00:05:43.389 }, 00:05:43.389 "claimed": true, 00:05:43.389 "claim_type": "exclusive_write", 00:05:43.389 "zoned": 
false, 00:05:43.389 "supported_io_types": { 00:05:43.389 "read": true, 00:05:43.389 "write": true, 00:05:43.389 "unmap": true, 00:05:43.389 "write_zeroes": true, 00:05:43.389 "flush": true, 00:05:43.389 "reset": true, 00:05:43.389 "compare": false, 00:05:43.389 "compare_and_write": false, 00:05:43.389 "abort": true, 00:05:43.389 "nvme_admin": false, 00:05:43.389 "nvme_io": false 00:05:43.389 }, 00:05:43.389 "memory_domains": [ 00:05:43.389 { 00:05:43.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.389 "dma_device_type": 2 00:05:43.389 } 00:05:43.389 ], 00:05:43.389 "driver_specific": {} 00:05:43.389 }, 00:05:43.389 { 00:05:43.389 "name": "Passthru0", 00:05:43.389 "aliases": [ 00:05:43.389 "cdcf63f0-ae5e-5842-90d3-b78ce03f13e1" 00:05:43.389 ], 00:05:43.389 "product_name": "passthru", 00:05:43.389 "block_size": 512, 00:05:43.389 "num_blocks": 16384, 00:05:43.389 "uuid": "cdcf63f0-ae5e-5842-90d3-b78ce03f13e1", 00:05:43.390 "assigned_rate_limits": { 00:05:43.390 "rw_ios_per_sec": 0, 00:05:43.390 "rw_mbytes_per_sec": 0, 00:05:43.390 "r_mbytes_per_sec": 0, 00:05:43.390 "w_mbytes_per_sec": 0 00:05:43.390 }, 00:05:43.390 "claimed": false, 00:05:43.390 "zoned": false, 00:05:43.390 "supported_io_types": { 00:05:43.390 "read": true, 00:05:43.390 "write": true, 00:05:43.390 "unmap": true, 00:05:43.390 "write_zeroes": true, 00:05:43.390 "flush": true, 00:05:43.390 "reset": true, 00:05:43.390 "compare": false, 00:05:43.390 "compare_and_write": false, 00:05:43.390 "abort": true, 00:05:43.390 "nvme_admin": false, 00:05:43.390 "nvme_io": false 00:05:43.390 }, 00:05:43.390 "memory_domains": [ 00:05:43.390 { 00:05:43.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.390 "dma_device_type": 2 00:05:43.390 } 00:05:43.390 ], 00:05:43.390 "driver_specific": { 00:05:43.390 "passthru": { 00:05:43.390 "name": "Passthru0", 00:05:43.390 "base_bdev_name": "Malloc0" 00:05:43.390 } 00:05:43.390 } 00:05:43.390 } 00:05:43.390 ]' 00:05:43.390 17:15:51 -- rpc/rpc.sh@21 -- # jq length 
00:05:43.390 17:15:51 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.390 17:15:51 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.390 17:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.390 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.390 17:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.390 17:15:51 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:43.390 17:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.390 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.390 17:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.390 17:15:51 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.390 17:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.390 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.390 17:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.390 17:15:51 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.390 17:15:51 -- rpc/rpc.sh@26 -- # jq length 00:05:43.390 17:15:51 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.390 00:05:43.390 real 0m0.297s 00:05:43.390 user 0m0.183s 00:05:43.390 sys 0m0.047s 00:05:43.390 17:15:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.390 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.390 ************************************ 00:05:43.390 END TEST rpc_integrity 00:05:43.390 ************************************ 00:05:43.651 17:15:51 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:43.651 17:15:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.651 17:15:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.651 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.651 ************************************ 00:05:43.651 START TEST rpc_plugins 00:05:43.651 ************************************ 00:05:43.651 17:15:51 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:43.651 17:15:51 -- 
rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:43.651 17:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.651 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.651 17:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.651 17:15:51 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:43.651 17:15:51 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:43.651 17:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.651 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.651 17:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.651 17:15:51 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:43.651 { 00:05:43.651 "name": "Malloc1", 00:05:43.651 "aliases": [ 00:05:43.652 "7a0fdf8f-c80e-4dbc-8ac2-8f30345b428f" 00:05:43.652 ], 00:05:43.652 "product_name": "Malloc disk", 00:05:43.652 "block_size": 4096, 00:05:43.652 "num_blocks": 256, 00:05:43.652 "uuid": "7a0fdf8f-c80e-4dbc-8ac2-8f30345b428f", 00:05:43.652 "assigned_rate_limits": { 00:05:43.652 "rw_ios_per_sec": 0, 00:05:43.652 "rw_mbytes_per_sec": 0, 00:05:43.652 "r_mbytes_per_sec": 0, 00:05:43.652 "w_mbytes_per_sec": 0 00:05:43.652 }, 00:05:43.652 "claimed": false, 00:05:43.652 "zoned": false, 00:05:43.652 "supported_io_types": { 00:05:43.652 "read": true, 00:05:43.652 "write": true, 00:05:43.652 "unmap": true, 00:05:43.652 "write_zeroes": true, 00:05:43.652 "flush": true, 00:05:43.652 "reset": true, 00:05:43.652 "compare": false, 00:05:43.652 "compare_and_write": false, 00:05:43.652 "abort": true, 00:05:43.652 "nvme_admin": false, 00:05:43.652 "nvme_io": false 00:05:43.652 }, 00:05:43.652 "memory_domains": [ 00:05:43.652 { 00:05:43.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.652 "dma_device_type": 2 00:05:43.652 } 00:05:43.652 ], 00:05:43.652 "driver_specific": {} 00:05:43.652 } 00:05:43.652 ]' 00:05:43.652 17:15:51 -- rpc/rpc.sh@32 -- # jq length 00:05:43.652 17:15:52 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:43.652 17:15:52 -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:43.652 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.652 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.652 17:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.652 17:15:52 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:43.652 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.652 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.652 17:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.652 17:15:52 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:43.652 17:15:52 -- rpc/rpc.sh@36 -- # jq length 00:05:43.652 17:15:52 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:43.652 00:05:43.652 real 0m0.152s 00:05:43.652 user 0m0.089s 00:05:43.652 sys 0m0.024s 00:05:43.652 17:15:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.652 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.652 ************************************ 00:05:43.652 END TEST rpc_plugins 00:05:43.652 ************************************ 00:05:43.652 17:15:52 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:43.652 17:15:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.652 17:15:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.652 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.652 ************************************ 00:05:43.652 START TEST rpc_trace_cmd_test 00:05:43.652 ************************************ 00:05:43.652 17:15:52 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:43.652 17:15:52 -- rpc/rpc.sh@40 -- # local info 00:05:43.652 17:15:52 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:43.652 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.652 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.652 17:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.652 17:15:52 -- 
rpc/rpc.sh@42 -- # info='{ 00:05:43.652 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2972697", 00:05:43.652 "tpoint_group_mask": "0x8", 00:05:43.652 "iscsi_conn": { 00:05:43.652 "mask": "0x2", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "scsi": { 00:05:43.652 "mask": "0x4", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "bdev": { 00:05:43.652 "mask": "0x8", 00:05:43.652 "tpoint_mask": "0xffffffffffffffff" 00:05:43.652 }, 00:05:43.652 "nvmf_rdma": { 00:05:43.652 "mask": "0x10", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "nvmf_tcp": { 00:05:43.652 "mask": "0x20", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "ftl": { 00:05:43.652 "mask": "0x40", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "blobfs": { 00:05:43.652 "mask": "0x80", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "dsa": { 00:05:43.652 "mask": "0x200", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "thread": { 00:05:43.652 "mask": "0x400", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "nvme_pcie": { 00:05:43.652 "mask": "0x800", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "iaa": { 00:05:43.652 "mask": "0x1000", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "nvme_tcp": { 00:05:43.652 "mask": "0x2000", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 }, 00:05:43.652 "bdev_nvme": { 00:05:43.652 "mask": "0x4000", 00:05:43.652 "tpoint_mask": "0x0" 00:05:43.652 } 00:05:43.652 }' 00:05:43.652 17:15:52 -- rpc/rpc.sh@43 -- # jq length 00:05:43.914 17:15:52 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:43.914 17:15:52 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:43.914 17:15:52 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:43.914 17:15:52 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:43.914 17:15:52 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:43.914 17:15:52 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:43.914 
17:15:52 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:43.914 17:15:52 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:43.914 17:15:52 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:43.914 00:05:43.914 real 0m0.231s 00:05:43.914 user 0m0.194s 00:05:43.914 sys 0m0.028s 00:05:43.914 17:15:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.914 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.914 ************************************ 00:05:43.914 END TEST rpc_trace_cmd_test 00:05:43.914 ************************************ 00:05:43.914 17:15:52 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:43.914 17:15:52 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:43.914 17:15:52 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:43.914 17:15:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.914 17:15:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.914 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.914 ************************************ 00:05:43.914 START TEST rpc_daemon_integrity 00:05:43.914 ************************************ 00:05:43.914 17:15:52 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:43.914 17:15:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.914 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.914 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.914 17:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.914 17:15:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.914 17:15:52 -- rpc/rpc.sh@13 -- # jq length 00:05:44.176 17:15:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:44.176 17:15:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.176 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.176 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 17:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.177 17:15:52 -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:05:44.177 17:15:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:44.177 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.177 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 17:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.177 17:15:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.177 { 00:05:44.177 "name": "Malloc2", 00:05:44.177 "aliases": [ 00:05:44.177 "95f34414-5043-4b97-912f-9c0244f99b1d" 00:05:44.177 ], 00:05:44.177 "product_name": "Malloc disk", 00:05:44.177 "block_size": 512, 00:05:44.177 "num_blocks": 16384, 00:05:44.177 "uuid": "95f34414-5043-4b97-912f-9c0244f99b1d", 00:05:44.177 "assigned_rate_limits": { 00:05:44.177 "rw_ios_per_sec": 0, 00:05:44.177 "rw_mbytes_per_sec": 0, 00:05:44.177 "r_mbytes_per_sec": 0, 00:05:44.177 "w_mbytes_per_sec": 0 00:05:44.177 }, 00:05:44.177 "claimed": false, 00:05:44.177 "zoned": false, 00:05:44.177 "supported_io_types": { 00:05:44.177 "read": true, 00:05:44.177 "write": true, 00:05:44.177 "unmap": true, 00:05:44.177 "write_zeroes": true, 00:05:44.177 "flush": true, 00:05:44.177 "reset": true, 00:05:44.177 "compare": false, 00:05:44.177 "compare_and_write": false, 00:05:44.177 "abort": true, 00:05:44.177 "nvme_admin": false, 00:05:44.177 "nvme_io": false 00:05:44.177 }, 00:05:44.177 "memory_domains": [ 00:05:44.177 { 00:05:44.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.177 "dma_device_type": 2 00:05:44.177 } 00:05:44.177 ], 00:05:44.177 "driver_specific": {} 00:05:44.177 } 00:05:44.177 ]' 00:05:44.177 17:15:52 -- rpc/rpc.sh@17 -- # jq length 00:05:44.177 17:15:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:44.177 17:15:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:44.177 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.177 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 [2024-10-13 17:15:52.550642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on Malloc2 00:05:44.177 [2024-10-13 17:15:52.550686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.177 [2024-10-13 17:15:52.550703] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26b9100 00:05:44.177 [2024-10-13 17:15:52.550711] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.177 [2024-10-13 17:15:52.552086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.177 [2024-10-13 17:15:52.552121] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.177 Passthru0 00:05:44.177 17:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.177 17:15:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.177 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.177 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 17:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.177 17:15:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.177 { 00:05:44.177 "name": "Malloc2", 00:05:44.177 "aliases": [ 00:05:44.177 "95f34414-5043-4b97-912f-9c0244f99b1d" 00:05:44.177 ], 00:05:44.177 "product_name": "Malloc disk", 00:05:44.177 "block_size": 512, 00:05:44.177 "num_blocks": 16384, 00:05:44.177 "uuid": "95f34414-5043-4b97-912f-9c0244f99b1d", 00:05:44.177 "assigned_rate_limits": { 00:05:44.177 "rw_ios_per_sec": 0, 00:05:44.177 "rw_mbytes_per_sec": 0, 00:05:44.177 "r_mbytes_per_sec": 0, 00:05:44.177 "w_mbytes_per_sec": 0 00:05:44.177 }, 00:05:44.177 "claimed": true, 00:05:44.177 "claim_type": "exclusive_write", 00:05:44.177 "zoned": false, 00:05:44.177 "supported_io_types": { 00:05:44.177 "read": true, 00:05:44.177 "write": true, 00:05:44.177 "unmap": true, 00:05:44.177 "write_zeroes": true, 00:05:44.177 "flush": true, 00:05:44.177 "reset": true, 00:05:44.177 "compare": false, 00:05:44.177 "compare_and_write": false, 00:05:44.177 "abort": true, 00:05:44.177 
"nvme_admin": false, 00:05:44.177 "nvme_io": false 00:05:44.177 }, 00:05:44.177 "memory_domains": [ 00:05:44.177 { 00:05:44.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.177 "dma_device_type": 2 00:05:44.177 } 00:05:44.177 ], 00:05:44.177 "driver_specific": {} 00:05:44.177 }, 00:05:44.177 { 00:05:44.177 "name": "Passthru0", 00:05:44.177 "aliases": [ 00:05:44.177 "ec454459-45db-5853-8a2a-acc75af21afb" 00:05:44.177 ], 00:05:44.177 "product_name": "passthru", 00:05:44.177 "block_size": 512, 00:05:44.177 "num_blocks": 16384, 00:05:44.177 "uuid": "ec454459-45db-5853-8a2a-acc75af21afb", 00:05:44.177 "assigned_rate_limits": { 00:05:44.177 "rw_ios_per_sec": 0, 00:05:44.177 "rw_mbytes_per_sec": 0, 00:05:44.177 "r_mbytes_per_sec": 0, 00:05:44.177 "w_mbytes_per_sec": 0 00:05:44.177 }, 00:05:44.177 "claimed": false, 00:05:44.177 "zoned": false, 00:05:44.177 "supported_io_types": { 00:05:44.177 "read": true, 00:05:44.177 "write": true, 00:05:44.177 "unmap": true, 00:05:44.177 "write_zeroes": true, 00:05:44.177 "flush": true, 00:05:44.177 "reset": true, 00:05:44.177 "compare": false, 00:05:44.177 "compare_and_write": false, 00:05:44.177 "abort": true, 00:05:44.177 "nvme_admin": false, 00:05:44.177 "nvme_io": false 00:05:44.177 }, 00:05:44.177 "memory_domains": [ 00:05:44.177 { 00:05:44.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.177 "dma_device_type": 2 00:05:44.177 } 00:05:44.177 ], 00:05:44.177 "driver_specific": { 00:05:44.177 "passthru": { 00:05:44.177 "name": "Passthru0", 00:05:44.177 "base_bdev_name": "Malloc2" 00:05:44.177 } 00:05:44.177 } 00:05:44.177 } 00:05:44.177 ]' 00:05:44.177 17:15:52 -- rpc/rpc.sh@21 -- # jq length 00:05:44.177 17:15:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.177 17:15:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.177 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.177 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 17:15:52 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.177 17:15:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:44.177 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.177 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 17:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.177 17:15:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.177 17:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.177 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 17:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.177 17:15:52 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.177 17:15:52 -- rpc/rpc.sh@26 -- # jq length 00:05:44.438 17:15:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.438 00:05:44.438 real 0m0.291s 00:05:44.439 user 0m0.194s 00:05:44.439 sys 0m0.038s 00:05:44.439 17:15:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.439 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.439 ************************************ 00:05:44.439 END TEST rpc_daemon_integrity 00:05:44.439 ************************************ 00:05:44.439 17:15:52 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:44.439 17:15:52 -- rpc/rpc.sh@84 -- # killprocess 2972697 00:05:44.439 17:15:52 -- common/autotest_common.sh@926 -- # '[' -z 2972697 ']' 00:05:44.439 17:15:52 -- common/autotest_common.sh@930 -- # kill -0 2972697 00:05:44.439 17:15:52 -- common/autotest_common.sh@931 -- # uname 00:05:44.439 17:15:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:44.439 17:15:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2972697 00:05:44.439 17:15:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:44.439 17:15:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:44.439 17:15:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2972697' 00:05:44.439 killing process 
with pid 2972697 00:05:44.439 17:15:52 -- common/autotest_common.sh@945 -- # kill 2972697 00:05:44.439 17:15:52 -- common/autotest_common.sh@950 -- # wait 2972697 00:05:44.701 00:05:44.701 real 0m2.426s 00:05:44.701 user 0m3.133s 00:05:44.701 sys 0m0.724s 00:05:44.701 17:15:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.701 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 ************************************ 00:05:44.701 END TEST rpc 00:05:44.701 ************************************ 00:05:44.701 17:15:53 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:44.701 17:15:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.701 17:15:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.701 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 ************************************ 00:05:44.701 START TEST rpc_client 00:05:44.701 ************************************ 00:05:44.701 17:15:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:44.701 * Looking for test storage... 
00:05:44.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:44.701 17:15:53 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:44.701 OK 00:05:44.701 17:15:53 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:44.701 00:05:44.701 real 0m0.123s 00:05:44.701 user 0m0.045s 00:05:44.701 sys 0m0.086s 00:05:44.701 17:15:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.701 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 ************************************ 00:05:44.701 END TEST rpc_client 00:05:44.701 ************************************ 00:05:44.962 17:15:53 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.962 17:15:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.962 17:15:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.962 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:05:44.962 ************************************ 00:05:44.962 START TEST json_config 00:05:44.962 ************************************ 00:05:44.962 17:15:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.962 17:15:53 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.962 17:15:53 -- nvmf/common.sh@7 -- # uname -s 00:05:44.962 17:15:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.963 17:15:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.963 17:15:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.963 17:15:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.963 17:15:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.963 17:15:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.963 17:15:53 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.963 17:15:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.963 17:15:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.963 17:15:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.963 17:15:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:44.963 17:15:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:44.963 17:15:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.963 17:15:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.963 17:15:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.963 17:15:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.963 17:15:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.963 17:15:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.963 17:15:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.963 17:15:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.963 17:15:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.963 17:15:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.963 17:15:53 -- paths/export.sh@5 -- # export PATH 00:05:44.963 17:15:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.963 17:15:53 -- nvmf/common.sh@46 -- # : 0 00:05:44.963 17:15:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:44.963 17:15:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:44.963 17:15:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:44.963 17:15:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.963 17:15:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.963 17:15:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:44.963 17:15:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:44.963 17:15:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:44.963 
17:15:53 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:44.963 17:15:53 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:44.963 17:15:53 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:44.963 17:15:53 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:44.963 17:15:53 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:44.963 17:15:53 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:44.963 17:15:53 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:44.963 17:15:53 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:44.963 17:15:53 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:44.963 17:15:53 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:44.963 17:15:53 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:44.963 17:15:53 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:44.963 17:15:53 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:44.963 17:15:53 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:44.963 17:15:53 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:44.963 INFO: JSON configuration test init 00:05:44.963 17:15:53 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:44.963 17:15:53 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:44.963 17:15:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:44.963 17:15:53 -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.963 17:15:53 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:44.963 17:15:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:44.963 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:05:44.963 17:15:53 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:44.963 17:15:53 -- json_config/json_config.sh@98 -- # local app=target 00:05:44.963 17:15:53 -- json_config/json_config.sh@99 -- # shift 00:05:44.963 17:15:53 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:44.963 17:15:53 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:44.963 17:15:53 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:44.963 17:15:53 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:44.963 17:15:53 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:44.963 17:15:53 -- json_config/json_config.sh@111 -- # app_pid[$app]=2973343 00:05:44.963 17:15:53 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:44.963 Waiting for target to run... 00:05:44.963 17:15:53 -- json_config/json_config.sh@114 -- # waitforlisten 2973343 /var/tmp/spdk_tgt.sock 00:05:44.963 17:15:53 -- common/autotest_common.sh@819 -- # '[' -z 2973343 ']' 00:05:44.963 17:15:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.963 17:15:53 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:44.963 17:15:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:44.963 17:15:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:44.963 17:15:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:44.963 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:05:44.963 [2024-10-13 17:15:53.435894] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:44.963 [2024-10-13 17:15:53.435971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2973343 ] 00:05:44.963 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.534 [2024-10-13 17:15:53.878989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.535 [2024-10-13 17:15:53.908885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.535 [2024-10-13 17:15:53.909075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.796 17:15:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.796 17:15:54 -- common/autotest_common.sh@852 -- # return 0 00:05:45.796 17:15:54 -- json_config/json_config.sh@115 -- # echo '' 00:05:45.796 00:05:45.796 17:15:54 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:45.796 17:15:54 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:45.796 17:15:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.796 17:15:54 -- common/autotest_common.sh@10 -- # set +x 00:05:45.796 17:15:54 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:45.796 17:15:54 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:45.796 17:15:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:45.796 17:15:54 -- common/autotest_common.sh@10 -- # set +x 00:05:45.796 17:15:54 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:45.796 17:15:54 -- 
json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:45.796 17:15:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:46.368 17:15:54 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:46.368 17:15:54 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:46.368 17:15:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.368 17:15:54 -- common/autotest_common.sh@10 -- # set +x 00:05:46.368 17:15:54 -- json_config/json_config.sh@48 -- # local ret=0 00:05:46.368 17:15:54 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:46.368 17:15:54 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:46.368 17:15:54 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:46.368 17:15:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:46.368 17:15:54 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:46.629 17:15:55 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:46.629 17:15:55 -- json_config/json_config.sh@51 -- # local get_types 00:05:46.629 17:15:55 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:46.629 17:15:55 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:46.629 17:15:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:46.629 17:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.629 17:15:55 -- json_config/json_config.sh@58 -- # return 0 00:05:46.629 17:15:55 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:46.629 17:15:55 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:46.629 17:15:55 -- json_config/json_config.sh@339 -- 
# [[ 0 -eq 1 ]] 00:05:46.629 17:15:55 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:46.629 17:15:55 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:46.629 17:15:55 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:46.629 17:15:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.629 17:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.629 17:15:55 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:46.629 17:15:55 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:46.629 17:15:55 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:46.629 17:15:55 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:46.629 17:15:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:46.890 MallocForNvmf0 00:05:46.890 17:15:55 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:46.890 17:15:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:47.151 MallocForNvmf1 00:05:47.151 17:15:55 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:47.151 17:15:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:47.151 [2024-10-13 17:15:55.579213] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.151 17:15:55 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:47.151 17:15:55 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:47.412 17:15:55 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:47.412 17:15:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:47.673 17:15:55 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.673 17:15:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.673 17:15:56 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.673 17:15:56 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.933 [2024-10-13 17:15:56.253596] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:47.933 17:15:56 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:47.933 17:15:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.933 17:15:56 -- common/autotest_common.sh@10 -- # set +x 00:05:47.933 17:15:56 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:47.933 17:15:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.933 17:15:56 -- common/autotest_common.sh@10 -- # set +x 00:05:47.933 17:15:56 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:47.933 17:15:56 -- json_config/json_config.sh@353 -- 
# tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.933 17:15:56 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:48.193 MallocBdevForConfigChangeCheck 00:05:48.193 17:15:56 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:48.193 17:15:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:48.193 17:15:56 -- common/autotest_common.sh@10 -- # set +x 00:05:48.193 17:15:56 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:48.193 17:15:56 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.454 17:15:56 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:48.454 INFO: shutting down applications... 00:05:48.454 17:15:56 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:48.454 17:15:56 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:48.454 17:15:56 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:48.454 17:15:56 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:49.026 Calling clear_iscsi_subsystem 00:05:49.026 Calling clear_nvmf_subsystem 00:05:49.026 Calling clear_nbd_subsystem 00:05:49.026 Calling clear_ublk_subsystem 00:05:49.026 Calling clear_vhost_blk_subsystem 00:05:49.026 Calling clear_vhost_scsi_subsystem 00:05:49.026 Calling clear_scheduler_subsystem 00:05:49.026 Calling clear_bdev_subsystem 00:05:49.026 Calling clear_accel_subsystem 00:05:49.026 Calling clear_vmd_subsystem 00:05:49.026 Calling clear_sock_subsystem 00:05:49.026 Calling clear_iobuf_subsystem 00:05:49.026 17:15:57 -- json_config/json_config.sh@390 -- # local 
config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:49.026 17:15:57 -- json_config/json_config.sh@396 -- # count=100 00:05:49.026 17:15:57 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:49.026 17:15:57 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.026 17:15:57 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:49.026 17:15:57 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:49.286 17:15:57 -- json_config/json_config.sh@398 -- # break 00:05:49.286 17:15:57 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:49.286 17:15:57 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:49.286 17:15:57 -- json_config/json_config.sh@120 -- # local app=target 00:05:49.286 17:15:57 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:49.286 17:15:57 -- json_config/json_config.sh@124 -- # [[ -n 2973343 ]] 00:05:49.286 17:15:57 -- json_config/json_config.sh@127 -- # kill -SIGINT 2973343 00:05:49.286 17:15:57 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:49.286 17:15:57 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:49.286 17:15:57 -- json_config/json_config.sh@130 -- # kill -0 2973343 00:05:49.286 17:15:57 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:49.859 17:15:58 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:49.859 17:15:58 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:49.859 17:15:58 -- json_config/json_config.sh@130 -- # kill -0 2973343 00:05:49.859 17:15:58 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:49.859 17:15:58 -- json_config/json_config.sh@132 -- # break 00:05:49.859 17:15:58 -- 
json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:49.859 17:15:58 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:49.859 SPDK target shutdown done 00:05:49.859 17:15:58 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:49.859 INFO: relaunching applications... 00:05:49.859 17:15:58 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:49.859 17:15:58 -- json_config/json_config.sh@98 -- # local app=target 00:05:49.859 17:15:58 -- json_config/json_config.sh@99 -- # shift 00:05:49.859 17:15:58 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:49.859 17:15:58 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:49.859 17:15:58 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:49.859 17:15:58 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:49.859 17:15:58 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:49.859 17:15:58 -- json_config/json_config.sh@111 -- # app_pid[$app]=2974393 00:05:49.859 17:15:58 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:49.859 Waiting for target to run... 
00:05:49.859 17:15:58 -- json_config/json_config.sh@114 -- # waitforlisten 2974393 /var/tmp/spdk_tgt.sock 00:05:49.859 17:15:58 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:49.859 17:15:58 -- common/autotest_common.sh@819 -- # '[' -z 2974393 ']' 00:05:49.859 17:15:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.859 17:15:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.859 17:15:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.859 17:15:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.859 17:15:58 -- common/autotest_common.sh@10 -- # set +x 00:05:49.859 [2024-10-13 17:15:58.194796] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:49.859 [2024-10-13 17:15:58.194853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2974393 ] 00:05:49.859 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.119 [2024-10-13 17:15:58.505198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.119 [2024-10-13 17:15:58.527972] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.119 [2024-10-13 17:15:58.528109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.689 [2024-10-13 17:15:58.984911] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.689 [2024-10-13 17:15:59.017378] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:51.260 17:15:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.260 17:15:59 -- common/autotest_common.sh@852 -- # return 0 00:05:51.260 17:15:59 -- json_config/json_config.sh@115 -- # echo '' 00:05:51.260 00:05:51.260 17:15:59 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:51.260 17:15:59 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:51.260 INFO: Checking if target configuration is the same... 
00:05:51.260 17:15:59 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.260 17:15:59 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:51.260 17:15:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.260 + '[' 2 -ne 2 ']' 00:05:51.260 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:51.260 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:51.260 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:51.260 +++ basename /dev/fd/62 00:05:51.260 ++ mktemp /tmp/62.XXX 00:05:51.260 + tmp_file_1=/tmp/62.LHj 00:05:51.260 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.260 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:51.260 + tmp_file_2=/tmp/spdk_tgt_config.json.vr3 00:05:51.260 + ret=0 00:05:51.260 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.521 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.521 + diff -u /tmp/62.LHj /tmp/spdk_tgt_config.json.vr3 00:05:51.521 + echo 'INFO: JSON config files are the same' 00:05:51.521 INFO: JSON config files are the same 00:05:51.521 + rm /tmp/62.LHj /tmp/spdk_tgt_config.json.vr3 00:05:51.521 + exit 0 00:05:51.521 17:15:59 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:51.521 17:15:59 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:51.521 INFO: changing configuration and checking if this can be detected... 
00:05:51.521 17:15:59 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:51.521 17:15:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:51.782 17:16:00 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.782 17:16:00 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:51.782 17:16:00 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.782 + '[' 2 -ne 2 ']' 00:05:51.782 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:51.782 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:51.782 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:51.782 +++ basename /dev/fd/62 00:05:51.782 ++ mktemp /tmp/62.XXX 00:05:51.782 + tmp_file_1=/tmp/62.Yxz 00:05:51.782 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.782 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:51.782 + tmp_file_2=/tmp/spdk_tgt_config.json.htu 00:05:51.782 + ret=0 00:05:51.782 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:52.043 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:52.043 + diff -u /tmp/62.Yxz /tmp/spdk_tgt_config.json.htu 00:05:52.043 + ret=1 00:05:52.043 + echo '=== Start of file: /tmp/62.Yxz ===' 00:05:52.043 + cat /tmp/62.Yxz 00:05:52.043 + echo '=== End of file: /tmp/62.Yxz ===' 00:05:52.043 + echo '' 00:05:52.043 + echo '=== Start of file: /tmp/spdk_tgt_config.json.htu ===' 00:05:52.043 + cat /tmp/spdk_tgt_config.json.htu 00:05:52.043 + echo '=== End of file: /tmp/spdk_tgt_config.json.htu ===' 00:05:52.043 + echo '' 00:05:52.043 + rm /tmp/62.Yxz /tmp/spdk_tgt_config.json.htu 00:05:52.043 + exit 1 00:05:52.043 17:16:00 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:52.043 INFO: configuration change detected. 
00:05:52.043 17:16:00 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:52.043 17:16:00 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:52.043 17:16:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:52.043 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.043 17:16:00 -- json_config/json_config.sh@360 -- # local ret=0 00:05:52.043 17:16:00 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:52.043 17:16:00 -- json_config/json_config.sh@370 -- # [[ -n 2974393 ]] 00:05:52.043 17:16:00 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:52.043 17:16:00 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:52.043 17:16:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:52.043 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.043 17:16:00 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:52.043 17:16:00 -- json_config/json_config.sh@246 -- # uname -s 00:05:52.043 17:16:00 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:52.043 17:16:00 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:52.043 17:16:00 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:52.043 17:16:00 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:52.043 17:16:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.043 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.303 17:16:00 -- json_config/json_config.sh@376 -- # killprocess 2974393 00:05:52.303 17:16:00 -- common/autotest_common.sh@926 -- # '[' -z 2974393 ']' 00:05:52.303 17:16:00 -- common/autotest_common.sh@930 -- # kill -0 2974393 00:05:52.303 17:16:00 -- common/autotest_common.sh@931 -- # uname 00:05:52.303 17:16:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:52.303 17:16:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2974393 00:05:52.303 
17:16:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:52.303 17:16:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:52.303 17:16:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2974393' 00:05:52.303 killing process with pid 2974393 00:05:52.303 17:16:00 -- common/autotest_common.sh@945 -- # kill 2974393 00:05:52.303 17:16:00 -- common/autotest_common.sh@950 -- # wait 2974393 00:05:52.564 17:16:00 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.564 17:16:00 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:52.564 17:16:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.564 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.564 17:16:00 -- json_config/json_config.sh@381 -- # return 0 00:05:52.564 17:16:00 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:52.564 INFO: Success 00:05:52.564 00:05:52.564 real 0m7.690s 00:05:52.564 user 0m9.328s 00:05:52.564 sys 0m1.971s 00:05:52.564 17:16:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.564 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.564 ************************************ 00:05:52.564 END TEST json_config 00:05:52.564 ************************************ 00:05:52.564 17:16:00 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:52.564 17:16:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.564 17:16:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.564 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.564 ************************************ 00:05:52.564 START TEST json_config_extra_key 00:05:52.564 ************************************ 00:05:52.564 
17:16:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:52.564 17:16:01 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:52.564 17:16:01 -- nvmf/common.sh@7 -- # uname -s 00:05:52.564 17:16:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.564 17:16:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.564 17:16:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.564 17:16:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.564 17:16:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.564 17:16:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.564 17:16:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.564 17:16:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.564 17:16:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.564 17:16:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.564 17:16:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:52.564 17:16:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:52.564 17:16:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.564 17:16:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.564 17:16:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:52.564 17:16:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:52.564 17:16:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.564 17:16:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.564 17:16:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.564 17:16:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.564 17:16:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.564 17:16:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.564 17:16:01 -- paths/export.sh@5 -- # export PATH 00:05:52.564 17:16:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.564 17:16:01 -- nvmf/common.sh@46 -- # : 0 00:05:52.564 17:16:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:52.564 17:16:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:52.564 
17:16:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:52.564 17:16:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.564 17:16:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.564 17:16:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:52.564 17:16:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:52.564 17:16:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:52.825 INFO: launching applications... 
00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2975180 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:52.825 Waiting for target to run... 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2975180 /var/tmp/spdk_tgt.sock 00:05:52.825 17:16:01 -- common/autotest_common.sh@819 -- # '[' -z 2975180 ']' 00:05:52.825 17:16:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:52.825 17:16:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.825 17:16:01 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:52.825 17:16:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:52.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:52.825 17:16:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.825 17:16:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.825 [2024-10-13 17:16:01.148917] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:52.825 [2024-10-13 17:16:01.148994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975180 ] 00:05:52.825 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.086 [2024-10-13 17:16:01.418592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.086 [2024-10-13 17:16:01.431928] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.086 [2024-10-13 17:16:01.432025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.657 17:16:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.657 17:16:01 -- common/autotest_common.sh@852 -- # return 0 00:05:53.657 17:16:01 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:53.657 00:05:53.657 17:16:01 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:53.657 INFO: shutting down applications... 
00:05:53.657 17:16:01 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:53.657 17:16:01 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:53.658 17:16:01 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:53.658 17:16:01 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2975180 ]] 00:05:53.658 17:16:01 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2975180 00:05:53.658 17:16:01 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:53.658 17:16:01 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:53.658 17:16:01 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2975180 00:05:53.658 17:16:01 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:53.918 17:16:02 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:53.918 17:16:02 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:53.918 17:16:02 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2975180 00:05:53.918 17:16:02 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:53.918 17:16:02 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:53.918 17:16:02 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:53.918 17:16:02 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:53.918 SPDK target shutdown done 00:05:53.918 17:16:02 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:53.918 Success 00:05:53.918 00:05:53.918 real 0m1.434s 00:05:53.918 user 0m1.075s 00:05:53.918 sys 0m0.345s 00:05:53.918 17:16:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.918 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:53.919 ************************************ 00:05:53.919 END TEST json_config_extra_key 00:05:53.919 ************************************ 00:05:54.180 17:16:02 -- spdk/autotest.sh@180 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.180 17:16:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.180 17:16:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.180 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:54.180 ************************************ 00:05:54.180 START TEST alias_rpc 00:05:54.180 ************************************ 00:05:54.180 17:16:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.180 * Looking for test storage... 00:05:54.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:54.180 17:16:02 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:54.180 17:16:02 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2975567 00:05:54.180 17:16:02 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2975567 00:05:54.180 17:16:02 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.180 17:16:02 -- common/autotest_common.sh@819 -- # '[' -z 2975567 ']' 00:05:54.180 17:16:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.180 17:16:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.180 17:16:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.180 17:16:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.180 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:54.180 [2024-10-13 17:16:02.631973] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:54.180 [2024-10-13 17:16:02.632037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975567 ] 00:05:54.180 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.440 [2024-10-13 17:16:02.712656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.440 [2024-10-13 17:16:02.741408] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.440 [2024-10-13 17:16:02.741509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.009 17:16:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.009 17:16:03 -- common/autotest_common.sh@852 -- # return 0 00:05:55.009 17:16:03 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:55.269 17:16:03 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2975567 00:05:55.269 17:16:03 -- common/autotest_common.sh@926 -- # '[' -z 2975567 ']' 00:05:55.269 17:16:03 -- common/autotest_common.sh@930 -- # kill -0 2975567 00:05:55.269 17:16:03 -- common/autotest_common.sh@931 -- # uname 00:05:55.269 17:16:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:55.269 17:16:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2975567 00:05:55.270 17:16:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:55.270 17:16:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:55.270 17:16:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2975567' 00:05:55.270 killing process with pid 2975567 00:05:55.270 17:16:03 -- common/autotest_common.sh@945 -- # kill 2975567 00:05:55.270 17:16:03 -- common/autotest_common.sh@950 -- # wait 2975567 00:05:55.531 00:05:55.531 real 0m1.339s 00:05:55.531 user 0m1.484s 00:05:55.531 sys 0m0.370s 
00:05:55.531 17:16:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.531 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.531 ************************************ 00:05:55.531 END TEST alias_rpc 00:05:55.531 ************************************ 00:05:55.531 17:16:03 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:55.531 17:16:03 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:55.531 17:16:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.531 17:16:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.531 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.531 ************************************ 00:05:55.531 START TEST spdkcli_tcp 00:05:55.531 ************************************ 00:05:55.531 17:16:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:55.531 * Looking for test storage... 
00:05:55.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:55.531 17:16:03 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:55.531 17:16:03 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:55.531 17:16:03 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:55.531 17:16:03 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:55.531 17:16:03 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:55.531 17:16:03 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:55.531 17:16:03 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:55.531 17:16:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:55.531 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.531 17:16:03 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2975881 00:05:55.531 17:16:03 -- spdkcli/tcp.sh@27 -- # waitforlisten 2975881 00:05:55.531 17:16:03 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:55.531 17:16:03 -- common/autotest_common.sh@819 -- # '[' -z 2975881 ']' 00:05:55.531 17:16:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.531 17:16:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.531 17:16:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:55.531 17:16:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.531 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.531 [2024-10-13 17:16:04.021045] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:55.531 [2024-10-13 17:16:04.021133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975881 ] 00:05:55.531 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.792 [2024-10-13 17:16:04.102296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.792 [2024-10-13 17:16:04.133871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.792 [2024-10-13 17:16:04.134104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.792 [2024-10-13 17:16:04.134134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.362 17:16:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.362 17:16:04 -- common/autotest_common.sh@852 -- # return 0 00:05:56.362 17:16:04 -- spdkcli/tcp.sh@31 -- # socat_pid=2975972 00:05:56.362 17:16:04 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:56.362 17:16:04 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:56.624 [ 00:05:56.624 "bdev_malloc_delete", 00:05:56.624 "bdev_malloc_create", 00:05:56.624 "bdev_null_resize", 00:05:56.624 "bdev_null_delete", 00:05:56.624 "bdev_null_create", 00:05:56.624 "bdev_nvme_cuse_unregister", 00:05:56.624 "bdev_nvme_cuse_register", 00:05:56.624 "bdev_opal_new_user", 00:05:56.624 "bdev_opal_set_lock_state", 00:05:56.624 "bdev_opal_delete", 00:05:56.624 "bdev_opal_get_info", 00:05:56.624 "bdev_opal_create", 00:05:56.624 
"bdev_nvme_opal_revert", 00:05:56.624 "bdev_nvme_opal_init", 00:05:56.624 "bdev_nvme_send_cmd", 00:05:56.624 "bdev_nvme_get_path_iostat", 00:05:56.624 "bdev_nvme_get_mdns_discovery_info", 00:05:56.624 "bdev_nvme_stop_mdns_discovery", 00:05:56.624 "bdev_nvme_start_mdns_discovery", 00:05:56.624 "bdev_nvme_set_multipath_policy", 00:05:56.624 "bdev_nvme_set_preferred_path", 00:05:56.624 "bdev_nvme_get_io_paths", 00:05:56.624 "bdev_nvme_remove_error_injection", 00:05:56.624 "bdev_nvme_add_error_injection", 00:05:56.624 "bdev_nvme_get_discovery_info", 00:05:56.624 "bdev_nvme_stop_discovery", 00:05:56.624 "bdev_nvme_start_discovery", 00:05:56.624 "bdev_nvme_get_controller_health_info", 00:05:56.624 "bdev_nvme_disable_controller", 00:05:56.624 "bdev_nvme_enable_controller", 00:05:56.624 "bdev_nvme_reset_controller", 00:05:56.624 "bdev_nvme_get_transport_statistics", 00:05:56.624 "bdev_nvme_apply_firmware", 00:05:56.624 "bdev_nvme_detach_controller", 00:05:56.624 "bdev_nvme_get_controllers", 00:05:56.624 "bdev_nvme_attach_controller", 00:05:56.624 "bdev_nvme_set_hotplug", 00:05:56.624 "bdev_nvme_set_options", 00:05:56.624 "bdev_passthru_delete", 00:05:56.624 "bdev_passthru_create", 00:05:56.624 "bdev_lvol_grow_lvstore", 00:05:56.624 "bdev_lvol_get_lvols", 00:05:56.624 "bdev_lvol_get_lvstores", 00:05:56.624 "bdev_lvol_delete", 00:05:56.624 "bdev_lvol_set_read_only", 00:05:56.624 "bdev_lvol_resize", 00:05:56.624 "bdev_lvol_decouple_parent", 00:05:56.624 "bdev_lvol_inflate", 00:05:56.624 "bdev_lvol_rename", 00:05:56.624 "bdev_lvol_clone_bdev", 00:05:56.624 "bdev_lvol_clone", 00:05:56.624 "bdev_lvol_snapshot", 00:05:56.624 "bdev_lvol_create", 00:05:56.624 "bdev_lvol_delete_lvstore", 00:05:56.624 "bdev_lvol_rename_lvstore", 00:05:56.624 "bdev_lvol_create_lvstore", 00:05:56.624 "bdev_raid_set_options", 00:05:56.624 "bdev_raid_remove_base_bdev", 00:05:56.624 "bdev_raid_add_base_bdev", 00:05:56.624 "bdev_raid_delete", 00:05:56.624 "bdev_raid_create", 00:05:56.624 
"bdev_raid_get_bdevs", 00:05:56.624 "bdev_error_inject_error", 00:05:56.624 "bdev_error_delete", 00:05:56.624 "bdev_error_create", 00:05:56.624 "bdev_split_delete", 00:05:56.624 "bdev_split_create", 00:05:56.624 "bdev_delay_delete", 00:05:56.624 "bdev_delay_create", 00:05:56.624 "bdev_delay_update_latency", 00:05:56.624 "bdev_zone_block_delete", 00:05:56.624 "bdev_zone_block_create", 00:05:56.624 "blobfs_create", 00:05:56.624 "blobfs_detect", 00:05:56.624 "blobfs_set_cache_size", 00:05:56.624 "bdev_aio_delete", 00:05:56.624 "bdev_aio_rescan", 00:05:56.624 "bdev_aio_create", 00:05:56.624 "bdev_ftl_set_property", 00:05:56.624 "bdev_ftl_get_properties", 00:05:56.624 "bdev_ftl_get_stats", 00:05:56.624 "bdev_ftl_unmap", 00:05:56.624 "bdev_ftl_unload", 00:05:56.624 "bdev_ftl_delete", 00:05:56.624 "bdev_ftl_load", 00:05:56.624 "bdev_ftl_create", 00:05:56.624 "bdev_virtio_attach_controller", 00:05:56.624 "bdev_virtio_scsi_get_devices", 00:05:56.624 "bdev_virtio_detach_controller", 00:05:56.624 "bdev_virtio_blk_set_hotplug", 00:05:56.624 "bdev_iscsi_delete", 00:05:56.624 "bdev_iscsi_create", 00:05:56.624 "bdev_iscsi_set_options", 00:05:56.624 "accel_error_inject_error", 00:05:56.624 "ioat_scan_accel_module", 00:05:56.624 "dsa_scan_accel_module", 00:05:56.624 "iaa_scan_accel_module", 00:05:56.624 "vfu_virtio_create_scsi_endpoint", 00:05:56.624 "vfu_virtio_scsi_remove_target", 00:05:56.624 "vfu_virtio_scsi_add_target", 00:05:56.624 "vfu_virtio_create_blk_endpoint", 00:05:56.624 "vfu_virtio_delete_endpoint", 00:05:56.624 "iscsi_set_options", 00:05:56.624 "iscsi_get_auth_groups", 00:05:56.624 "iscsi_auth_group_remove_secret", 00:05:56.624 "iscsi_auth_group_add_secret", 00:05:56.624 "iscsi_delete_auth_group", 00:05:56.624 "iscsi_create_auth_group", 00:05:56.624 "iscsi_set_discovery_auth", 00:05:56.624 "iscsi_get_options", 00:05:56.624 "iscsi_target_node_request_logout", 00:05:56.624 "iscsi_target_node_set_redirect", 00:05:56.624 "iscsi_target_node_set_auth", 00:05:56.624 
"iscsi_target_node_add_lun", 00:05:56.624 "iscsi_get_connections", 00:05:56.624 "iscsi_portal_group_set_auth", 00:05:56.624 "iscsi_start_portal_group", 00:05:56.624 "iscsi_delete_portal_group", 00:05:56.624 "iscsi_create_portal_group", 00:05:56.624 "iscsi_get_portal_groups", 00:05:56.624 "iscsi_delete_target_node", 00:05:56.624 "iscsi_target_node_remove_pg_ig_maps", 00:05:56.624 "iscsi_target_node_add_pg_ig_maps", 00:05:56.624 "iscsi_create_target_node", 00:05:56.624 "iscsi_get_target_nodes", 00:05:56.624 "iscsi_delete_initiator_group", 00:05:56.624 "iscsi_initiator_group_remove_initiators", 00:05:56.624 "iscsi_initiator_group_add_initiators", 00:05:56.624 "iscsi_create_initiator_group", 00:05:56.624 "iscsi_get_initiator_groups", 00:05:56.624 "nvmf_set_crdt", 00:05:56.624 "nvmf_set_config", 00:05:56.624 "nvmf_set_max_subsystems", 00:05:56.624 "nvmf_subsystem_get_listeners", 00:05:56.624 "nvmf_subsystem_get_qpairs", 00:05:56.624 "nvmf_subsystem_get_controllers", 00:05:56.624 "nvmf_get_stats", 00:05:56.624 "nvmf_get_transports", 00:05:56.624 "nvmf_create_transport", 00:05:56.624 "nvmf_get_targets", 00:05:56.624 "nvmf_delete_target", 00:05:56.624 "nvmf_create_target", 00:05:56.624 "nvmf_subsystem_allow_any_host", 00:05:56.624 "nvmf_subsystem_remove_host", 00:05:56.624 "nvmf_subsystem_add_host", 00:05:56.624 "nvmf_subsystem_remove_ns", 00:05:56.624 "nvmf_subsystem_add_ns", 00:05:56.624 "nvmf_subsystem_listener_set_ana_state", 00:05:56.624 "nvmf_discovery_get_referrals", 00:05:56.624 "nvmf_discovery_remove_referral", 00:05:56.624 "nvmf_discovery_add_referral", 00:05:56.624 "nvmf_subsystem_remove_listener", 00:05:56.624 "nvmf_subsystem_add_listener", 00:05:56.624 "nvmf_delete_subsystem", 00:05:56.624 "nvmf_create_subsystem", 00:05:56.624 "nvmf_get_subsystems", 00:05:56.624 "env_dpdk_get_mem_stats", 00:05:56.624 "nbd_get_disks", 00:05:56.624 "nbd_stop_disk", 00:05:56.624 "nbd_start_disk", 00:05:56.624 "ublk_recover_disk", 00:05:56.624 "ublk_get_disks", 00:05:56.624 
"ublk_stop_disk", 00:05:56.624 "ublk_start_disk", 00:05:56.624 "ublk_destroy_target", 00:05:56.624 "ublk_create_target", 00:05:56.624 "virtio_blk_create_transport", 00:05:56.624 "virtio_blk_get_transports", 00:05:56.624 "vhost_controller_set_coalescing", 00:05:56.624 "vhost_get_controllers", 00:05:56.624 "vhost_delete_controller", 00:05:56.624 "vhost_create_blk_controller", 00:05:56.624 "vhost_scsi_controller_remove_target", 00:05:56.624 "vhost_scsi_controller_add_target", 00:05:56.624 "vhost_start_scsi_controller", 00:05:56.624 "vhost_create_scsi_controller", 00:05:56.624 "thread_set_cpumask", 00:05:56.624 "framework_get_scheduler", 00:05:56.624 "framework_set_scheduler", 00:05:56.624 "framework_get_reactors", 00:05:56.624 "thread_get_io_channels", 00:05:56.624 "thread_get_pollers", 00:05:56.624 "thread_get_stats", 00:05:56.624 "framework_monitor_context_switch", 00:05:56.625 "spdk_kill_instance", 00:05:56.625 "log_enable_timestamps", 00:05:56.625 "log_get_flags", 00:05:56.625 "log_clear_flag", 00:05:56.625 "log_set_flag", 00:05:56.625 "log_get_level", 00:05:56.625 "log_set_level", 00:05:56.625 "log_get_print_level", 00:05:56.625 "log_set_print_level", 00:05:56.625 "framework_enable_cpumask_locks", 00:05:56.625 "framework_disable_cpumask_locks", 00:05:56.625 "framework_wait_init", 00:05:56.625 "framework_start_init", 00:05:56.625 "scsi_get_devices", 00:05:56.625 "bdev_get_histogram", 00:05:56.625 "bdev_enable_histogram", 00:05:56.625 "bdev_set_qos_limit", 00:05:56.625 "bdev_set_qd_sampling_period", 00:05:56.625 "bdev_get_bdevs", 00:05:56.625 "bdev_reset_iostat", 00:05:56.625 "bdev_get_iostat", 00:05:56.625 "bdev_examine", 00:05:56.625 "bdev_wait_for_examine", 00:05:56.625 "bdev_set_options", 00:05:56.625 "notify_get_notifications", 00:05:56.625 "notify_get_types", 00:05:56.625 "accel_get_stats", 00:05:56.625 "accel_set_options", 00:05:56.625 "accel_set_driver", 00:05:56.625 "accel_crypto_key_destroy", 00:05:56.625 "accel_crypto_keys_get", 00:05:56.625 
"accel_crypto_key_create", 00:05:56.625 "accel_assign_opc", 00:05:56.625 "accel_get_module_info", 00:05:56.625 "accel_get_opc_assignments", 00:05:56.625 "vmd_rescan", 00:05:56.625 "vmd_remove_device", 00:05:56.625 "vmd_enable", 00:05:56.625 "sock_set_default_impl", 00:05:56.625 "sock_impl_set_options", 00:05:56.625 "sock_impl_get_options", 00:05:56.625 "iobuf_get_stats", 00:05:56.625 "iobuf_set_options", 00:05:56.625 "framework_get_pci_devices", 00:05:56.625 "framework_get_config", 00:05:56.625 "framework_get_subsystems", 00:05:56.625 "vfu_tgt_set_base_path", 00:05:56.625 "trace_get_info", 00:05:56.625 "trace_get_tpoint_group_mask", 00:05:56.625 "trace_disable_tpoint_group", 00:05:56.625 "trace_enable_tpoint_group", 00:05:56.625 "trace_clear_tpoint_mask", 00:05:56.625 "trace_set_tpoint_mask", 00:05:56.625 "spdk_get_version", 00:05:56.625 "rpc_get_methods" 00:05:56.625 ] 00:05:56.625 17:16:04 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:56.625 17:16:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:56.625 17:16:04 -- common/autotest_common.sh@10 -- # set +x 00:05:56.625 17:16:05 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:56.625 17:16:05 -- spdkcli/tcp.sh@38 -- # killprocess 2975881 00:05:56.625 17:16:05 -- common/autotest_common.sh@926 -- # '[' -z 2975881 ']' 00:05:56.625 17:16:05 -- common/autotest_common.sh@930 -- # kill -0 2975881 00:05:56.625 17:16:05 -- common/autotest_common.sh@931 -- # uname 00:05:56.625 17:16:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.625 17:16:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2975881 00:05:56.625 17:16:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.625 17:16:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.625 17:16:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2975881' 00:05:56.625 killing process with pid 2975881 00:05:56.625 17:16:05 -- 
common/autotest_common.sh@945 -- # kill 2975881 00:05:56.625 17:16:05 -- common/autotest_common.sh@950 -- # wait 2975881 00:05:56.886 00:05:56.886 real 0m1.401s 00:05:56.886 user 0m2.654s 00:05:56.886 sys 0m0.427s 00:05:56.886 17:16:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.886 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:05:56.886 ************************************ 00:05:56.886 END TEST spdkcli_tcp 00:05:56.886 ************************************ 00:05:56.886 17:16:05 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.886 17:16:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.886 17:16:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.886 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:05:56.886 ************************************ 00:05:56.886 START TEST dpdk_mem_utility 00:05:56.886 ************************************ 00:05:56.886 17:16:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.886 * Looking for test storage... 
00:05:56.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:56.886 17:16:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:56.886 17:16:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2976157 00:05:56.886 17:16:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2976157 00:05:56.886 17:16:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.886 17:16:05 -- common/autotest_common.sh@819 -- # '[' -z 2976157 ']' 00:05:56.886 17:16:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.886 17:16:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.886 17:16:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.886 17:16:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.886 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:05:57.146 [2024-10-13 17:16:05.461332] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:57.146 [2024-10-13 17:16:05.461416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2976157 ] 00:05:57.146 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.146 [2024-10-13 17:16:05.542474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.146 [2024-10-13 17:16:05.575148] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.146 [2024-10-13 17:16:05.575262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.716 17:16:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.716 17:16:06 -- common/autotest_common.sh@852 -- # return 0 00:05:57.716 17:16:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:57.716 17:16:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:57.716 17:16:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.716 17:16:06 -- common/autotest_common.sh@10 -- # set +x 00:05:57.716 { 00:05:57.716 "filename": "/tmp/spdk_mem_dump.txt" 00:05:57.716 } 00:05:57.716 17:16:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.716 17:16:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:57.977 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:57.977 1 heaps totaling size 814.000000 MiB 00:05:57.977 size: 814.000000 MiB heap id: 0 00:05:57.977 end heaps---------- 00:05:57.977 8 mempools totaling size 598.116089 MiB 00:05:57.977 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:57.977 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:57.977 size: 84.521057 MiB name: bdev_io_2976157 00:05:57.977 size: 51.011292 MiB name: evtpool_2976157 00:05:57.977 size: 
50.003479 MiB name: msgpool_2976157 00:05:57.977 size: 21.763794 MiB name: PDU_Pool 00:05:57.977 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:57.977 size: 0.026123 MiB name: Session_Pool 00:05:57.977 end mempools------- 00:05:57.977 6 memzones totaling size 4.142822 MiB 00:05:57.977 size: 1.000366 MiB name: RG_ring_0_2976157 00:05:57.977 size: 1.000366 MiB name: RG_ring_1_2976157 00:05:57.977 size: 1.000366 MiB name: RG_ring_4_2976157 00:05:57.977 size: 1.000366 MiB name: RG_ring_5_2976157 00:05:57.977 size: 0.125366 MiB name: RG_ring_2_2976157 00:05:57.977 size: 0.015991 MiB name: RG_ring_3_2976157 00:05:57.977 end memzones------- 00:05:57.977 17:16:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:57.977 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:57.977 list of free elements. size: 12.519348 MiB 00:05:57.977 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:57.977 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:57.977 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:57.977 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:57.977 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:57.977 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:57.977 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:57.977 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:57.977 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:57.977 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:57.977 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:57.977 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:57.977 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:57.977 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:57.977 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:05:57.977 list of standard malloc elements. size: 199.218079 MiB 00:05:57.977 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:57.977 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:57.977 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:57.977 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:57.977 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:57.977 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:57.977 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:57.977 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:57.977 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:57.977 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:57.977 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:57.977 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:57.977 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:57.977 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:57.977 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:57.977 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:57.977 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:57.977 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:57.977 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:57.977 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:57.977 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:57.977 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:57.977 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:57.977 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:57.977 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:57.977 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:05:57.977 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:57.977 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:57.977 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:57.978 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:57.978 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:57.978 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:57.978 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:57.978 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:57.978 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:57.978 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:57.978 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:57.978 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:57.978 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:57.978 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:57.978 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:57.978 list of memzone associated elements. 
size: 602.262573 MiB 00:05:57.978 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:57.978 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:57.978 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:57.978 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:57.978 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:57.978 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2976157_0 00:05:57.978 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:57.978 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2976157_0 00:05:57.978 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:57.978 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2976157_0 00:05:57.978 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:57.978 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:57.978 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:57.978 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:57.978 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:57.978 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2976157 00:05:57.978 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:57.978 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2976157 00:05:57.978 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:57.978 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2976157 00:05:57.978 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:57.978 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:57.978 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:57.978 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:57.978 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:57.978 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:57.978 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:57.978 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:57.978 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:57.978 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2976157 00:05:57.978 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:57.978 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2976157 00:05:57.978 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:57.978 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2976157 00:05:57.978 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:57.978 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2976157 00:05:57.978 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:57.978 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2976157 00:05:57.978 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:57.978 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:57.978 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:57.978 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:57.978 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:57.978 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:57.978 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:57.978 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2976157 00:05:57.978 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:57.978 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:57.978 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:57.978 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:57.978 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:57.978 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2976157 00:05:57.978 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:57.978 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:57.978 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:57.978 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2976157 00:05:57.978 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:57.978 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2976157 00:05:57.978 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:57.978 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:57.978 17:16:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:57.978 17:16:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2976157 00:05:57.978 17:16:06 -- common/autotest_common.sh@926 -- # '[' -z 2976157 ']' 00:05:57.978 17:16:06 -- common/autotest_common.sh@930 -- # kill -0 2976157 00:05:57.978 17:16:06 -- common/autotest_common.sh@931 -- # uname 00:05:57.978 17:16:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:57.978 17:16:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2976157 00:05:57.978 17:16:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:57.978 17:16:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:57.978 17:16:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2976157' 00:05:57.978 killing process with pid 2976157 00:05:57.978 17:16:06 -- common/autotest_common.sh@945 -- # kill 2976157 00:05:57.978 17:16:06 -- common/autotest_common.sh@950 -- # wait 2976157 00:05:58.239 00:05:58.239 real 0m1.247s 00:05:58.239 user 0m1.315s 00:05:58.239 sys 0m0.374s 00:05:58.239 17:16:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.239 17:16:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.239 
************************************ 00:05:58.239 END TEST dpdk_mem_utility 00:05:58.239 ************************************ 00:05:58.239 17:16:06 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:58.239 17:16:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.239 17:16:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.239 17:16:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.239 ************************************ 00:05:58.239 START TEST event 00:05:58.239 ************************************ 00:05:58.239 17:16:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:58.239 * Looking for test storage... 00:05:58.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:58.239 17:16:06 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:58.239 17:16:06 -- bdev/nbd_common.sh@6 -- # set -e 00:05:58.239 17:16:06 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.239 17:16:06 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:58.239 17:16:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.239 17:16:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.239 ************************************ 00:05:58.239 START TEST event_perf 00:05:58.239 ************************************ 00:05:58.239 17:16:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.239 Running I/O for 1 seconds...[2024-10-13 17:16:06.725486] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:58.239 [2024-10-13 17:16:06.725601] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2976432 ] 00:05:58.239 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.499 [2024-10-13 17:16:06.813873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.499 [2024-10-13 17:16:06.847196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.499 [2024-10-13 17:16:06.847410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.499 [2024-10-13 17:16:06.847527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.499 [2024-10-13 17:16:06.847527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.440 Running I/O for 1 seconds... 00:05:59.440 lcore 0: 171455 00:05:59.440 lcore 1: 171457 00:05:59.440 lcore 2: 171460 00:05:59.440 lcore 3: 171455 00:05:59.440 done. 
00:05:59.440 00:05:59.440 real 0m1.180s 00:05:59.440 user 0m4.079s 00:05:59.440 sys 0m0.098s 00:05:59.440 17:16:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.440 17:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:59.440 ************************************ 00:05:59.440 END TEST event_perf 00:05:59.440 ************************************ 00:05:59.440 17:16:07 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:59.440 17:16:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:59.440 17:16:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.440 17:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:59.440 ************************************ 00:05:59.440 START TEST event_reactor 00:05:59.440 ************************************ 00:05:59.440 17:16:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:59.440 [2024-10-13 17:16:07.950340] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:59.440 [2024-10-13 17:16:07.950439] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2976787 ] 00:05:59.699 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.699 [2024-10-13 17:16:08.031696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.699 [2024-10-13 17:16:08.064123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.640 test_start 00:06:00.640 oneshot 00:06:00.640 tick 100 00:06:00.640 tick 100 00:06:00.640 tick 250 00:06:00.640 tick 100 00:06:00.640 tick 100 00:06:00.640 tick 250 00:06:00.640 tick 500 00:06:00.640 tick 100 00:06:00.640 tick 100 00:06:00.640 tick 100 00:06:00.640 tick 250 00:06:00.640 tick 100 00:06:00.640 tick 100 00:06:00.640 test_end 00:06:00.640 00:06:00.640 real 0m1.169s 00:06:00.640 user 0m1.073s 00:06:00.640 sys 0m0.091s 00:06:00.640 17:16:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.640 17:16:09 -- common/autotest_common.sh@10 -- # set +x 00:06:00.640 ************************************ 00:06:00.640 END TEST event_reactor 00:06:00.640 ************************************ 00:06:00.640 17:16:09 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.640 17:16:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:00.640 17:16:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.640 17:16:09 -- common/autotest_common.sh@10 -- # set +x 00:06:00.640 ************************************ 00:06:00.640 START TEST event_reactor_perf 00:06:00.640 ************************************ 00:06:00.640 17:16:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.640 [2024-10-13 17:16:09.162954] 
Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:00.640 [2024-10-13 17:16:09.163050] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2977103 ] 00:06:00.901 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.901 [2024-10-13 17:16:09.246340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.901 [2024-10-13 17:16:09.275675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.981 test_start 00:06:01.981 test_end 00:06:01.981 Performance: 504142 events per second 00:06:01.981 00:06:01.981 real 0m1.169s 00:06:01.981 user 0m1.077s 00:06:01.981 sys 0m0.088s 00:06:01.981 17:16:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.981 17:16:10 -- common/autotest_common.sh@10 -- # set +x 00:06:01.981 ************************************ 00:06:01.981 END TEST event_reactor_perf 00:06:01.981 ************************************ 00:06:01.981 17:16:10 -- event/event.sh@49 -- # uname -s 00:06:01.981 17:16:10 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:01.981 17:16:10 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:01.981 17:16:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.981 17:16:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.981 17:16:10 -- common/autotest_common.sh@10 -- # set +x 00:06:01.981 ************************************ 00:06:01.981 START TEST event_scheduler 00:06:01.981 ************************************ 00:06:01.981 17:16:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:01.981 * Looking for test storage... 
00:06:01.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:01.981 17:16:10 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:01.981 17:16:10 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2977288 00:06:01.981 17:16:10 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.981 17:16:10 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:01.981 17:16:10 -- scheduler/scheduler.sh@37 -- # waitforlisten 2977288 00:06:01.981 17:16:10 -- common/autotest_common.sh@819 -- # '[' -z 2977288 ']' 00:06:01.981 17:16:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.981 17:16:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.981 17:16:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.981 17:16:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.981 17:16:10 -- common/autotest_common.sh@10 -- # set +x 00:06:01.981 [2024-10-13 17:16:10.503166] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:01.981 [2024-10-13 17:16:10.503239] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2977288 ] 00:06:02.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.269 [2024-10-13 17:16:10.587804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.269 [2024-10-13 17:16:10.638729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.269 [2024-10-13 17:16:10.638893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.269 [2024-10-13 17:16:10.639053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.269 [2024-10-13 17:16:10.639054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.839 17:16:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:02.839 17:16:11 -- common/autotest_common.sh@852 -- # return 0 00:06:02.839 17:16:11 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:02.839 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.839 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:02.839 POWER: Env isn't set yet! 00:06:02.839 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:02.839 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.839 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.839 POWER: Attempting to initialise PSTAT power management... 
00:06:02.839 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:02.839 POWER: Initialized successfully for lcore 0 power management 00:06:02.839 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:02.839 POWER: Initialized successfully for lcore 1 power management 00:06:02.839 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:02.839 POWER: Initialized successfully for lcore 2 power management 00:06:02.839 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:02.839 POWER: Initialized successfully for lcore 3 power management 00:06:02.839 [2024-10-13 17:16:11.339449] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:02.839 [2024-10-13 17:16:11.339462] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:02.839 [2024-10-13 17:16:11.339468] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:02.839 17:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.839 17:16:11 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:02.839 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.839 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.100 [2024-10-13 17:16:11.389771] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:03.100 17:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.100 17:16:11 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:03.100 17:16:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.100 17:16:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.100 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.100 ************************************ 00:06:03.100 START TEST scheduler_create_thread 00:06:03.100 ************************************ 00:06:03.100 17:16:11 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:03.100 17:16:11 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:03.100 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.100 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.100 2 00:06:03.100 17:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.100 17:16:11 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:03.100 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.100 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.100 3 00:06:03.100 17:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.100 17:16:11 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:03.100 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.100 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.100 4 00:06:03.100 17:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.100 17:16:11 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:03.100 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.100 
17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.100 5 00:06:03.100 17:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.100 17:16:11 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:03.100 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.100 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.100 6 00:06:03.100 17:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.100 17:16:11 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:03.100 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.100 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.100 7 00:06:03.100 17:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.100 17:16:11 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:03.100 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.100 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.100 8 00:06:03.100 17:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.100 17:16:11 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:03.100 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.100 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.100 9 00:06:03.100 17:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.100 17:16:11 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:03.100 17:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.100 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:04.485 10 00:06:04.485 17:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:06:04.485 17:16:12 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:04.485 17:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.485 17:16:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.867 17:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.867 17:16:14 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.867 17:16:14 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.867 17:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.867 17:16:14 -- common/autotest_common.sh@10 -- # set +x 00:06:06.436 17:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:06.436 17:16:14 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:06.436 17:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:06.436 17:16:14 -- common/autotest_common.sh@10 -- # set +x 00:06:07.377 17:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.377 17:16:15 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:07.377 17:16:15 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:07.377 17:16:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.377 17:16:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.948 17:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.948 00:06:07.948 real 0m4.797s 00:06:07.948 user 0m0.025s 00:06:07.948 sys 0m0.006s 00:06:07.948 17:16:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.948 17:16:16 -- common/autotest_common.sh@10 -- # set +x 00:06:07.948 ************************************ 00:06:07.948 END TEST scheduler_create_thread 00:06:07.948 ************************************ 00:06:07.948 17:16:16 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:07.948 17:16:16 -- 
scheduler/scheduler.sh@46 -- # killprocess 2977288 00:06:07.948 17:16:16 -- common/autotest_common.sh@926 -- # '[' -z 2977288 ']' 00:06:07.948 17:16:16 -- common/autotest_common.sh@930 -- # kill -0 2977288 00:06:07.948 17:16:16 -- common/autotest_common.sh@931 -- # uname 00:06:07.948 17:16:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:07.948 17:16:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2977288 00:06:07.948 17:16:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:07.948 17:16:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:07.949 17:16:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2977288' 00:06:07.949 killing process with pid 2977288 00:06:07.949 17:16:16 -- common/autotest_common.sh@945 -- # kill 2977288 00:06:07.949 17:16:16 -- common/autotest_common.sh@950 -- # wait 2977288 00:06:08.209 [2024-10-13 17:16:16.475545] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:08.210 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:08.210 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:08.210 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:08.210 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:08.210 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:08.210 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:08.210 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:08.210 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:08.210 00:06:08.210 real 0m6.266s 00:06:08.210 user 0m14.086s 00:06:08.210 sys 0m0.369s 00:06:08.210 17:16:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.210 17:16:16 -- common/autotest_common.sh@10 -- # set +x 00:06:08.210 ************************************ 00:06:08.210 END TEST event_scheduler 00:06:08.210 ************************************ 00:06:08.210 17:16:16 -- event/event.sh@51 -- # modprobe -n nbd 00:06:08.210 17:16:16 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:08.210 17:16:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.210 17:16:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.210 17:16:16 -- common/autotest_common.sh@10 -- # set +x 00:06:08.210 ************************************ 00:06:08.210 START TEST app_repeat 00:06:08.210 ************************************ 00:06:08.210 17:16:16 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:08.210 17:16:16 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.210 17:16:16 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.210 
17:16:16 -- event/event.sh@13 -- # local nbd_list 00:06:08.210 17:16:16 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.210 17:16:16 -- event/event.sh@14 -- # local bdev_list 00:06:08.210 17:16:16 -- event/event.sh@15 -- # local repeat_times=4 00:06:08.210 17:16:16 -- event/event.sh@17 -- # modprobe nbd 00:06:08.210 17:16:16 -- event/event.sh@19 -- # repeat_pid=2978600 00:06:08.210 17:16:16 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.210 17:16:16 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:08.210 17:16:16 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2978600' 00:06:08.210 Process app_repeat pid: 2978600 00:06:08.210 17:16:16 -- event/event.sh@23 -- # for i in {0..2} 00:06:08.210 17:16:16 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:08.210 spdk_app_start Round 0 00:06:08.210 17:16:16 -- event/event.sh@25 -- # waitforlisten 2978600 /var/tmp/spdk-nbd.sock 00:06:08.210 17:16:16 -- common/autotest_common.sh@819 -- # '[' -z 2978600 ']' 00:06:08.210 17:16:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.210 17:16:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.210 17:16:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.210 17:16:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.210 17:16:16 -- common/autotest_common.sh@10 -- # set +x 00:06:08.210 [2024-10-13 17:16:16.715921] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:08.210 [2024-10-13 17:16:16.715992] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2978600 ] 00:06:08.470 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.470 [2024-10-13 17:16:16.779130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.470 [2024-10-13 17:16:16.807770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.470 [2024-10-13 17:16:16.807773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.042 17:16:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.042 17:16:17 -- common/autotest_common.sh@852 -- # return 0 00:06:09.042 17:16:17 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.303 Malloc0 00:06:09.303 17:16:17 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.565 Malloc1 00:06:09.565 17:16:17 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.565 17:16:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.565 17:16:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.565 17:16:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.565 17:16:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.565 17:16:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.565 17:16:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.565 17:16:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.565 17:16:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 
'Malloc1') 00:06:09.565 17:16:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.566 17:16:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.566 17:16:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.566 17:16:17 -- bdev/nbd_common.sh@12 -- # local i 00:06:09.566 17:16:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.566 17:16:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.566 17:16:17 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.566 /dev/nbd0 00:06:09.566 17:16:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.566 17:16:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.566 17:16:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:09.566 17:16:18 -- common/autotest_common.sh@857 -- # local i 00:06:09.566 17:16:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:09.566 17:16:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:09.566 17:16:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:09.566 17:16:18 -- common/autotest_common.sh@861 -- # break 00:06:09.566 17:16:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:09.566 17:16:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:09.566 17:16:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.566 1+0 records in 00:06:09.566 1+0 records out 00:06:09.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292207 s, 14.0 MB/s 00:06:09.566 17:16:18 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.566 17:16:18 -- common/autotest_common.sh@874 -- # size=4096 00:06:09.566 17:16:18 -- common/autotest_common.sh@875 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.566 17:16:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:09.566 17:16:18 -- common/autotest_common.sh@877 -- # return 0 00:06:09.566 17:16:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.566 17:16:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.566 17:16:18 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.828 /dev/nbd1 00:06:09.828 17:16:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.828 17:16:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.828 17:16:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:09.828 17:16:18 -- common/autotest_common.sh@857 -- # local i 00:06:09.828 17:16:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:09.828 17:16:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:09.828 17:16:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:09.828 17:16:18 -- common/autotest_common.sh@861 -- # break 00:06:09.828 17:16:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:09.829 17:16:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:09.829 17:16:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.829 1+0 records in 00:06:09.829 1+0 records out 00:06:09.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286435 s, 14.3 MB/s 00:06:09.829 17:16:18 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.829 17:16:18 -- common/autotest_common.sh@874 -- # size=4096 00:06:09.829 17:16:18 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.829 17:16:18 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:06:09.829 17:16:18 -- common/autotest_common.sh@877 -- # return 0 00:06:09.829 17:16:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.829 17:16:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.829 17:16:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.829 17:16:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.829 17:16:18 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.090 { 00:06:10.090 "nbd_device": "/dev/nbd0", 00:06:10.090 "bdev_name": "Malloc0" 00:06:10.090 }, 00:06:10.090 { 00:06:10.090 "nbd_device": "/dev/nbd1", 00:06:10.090 "bdev_name": "Malloc1" 00:06:10.090 } 00:06:10.090 ]' 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.090 { 00:06:10.090 "nbd_device": "/dev/nbd0", 00:06:10.090 "bdev_name": "Malloc0" 00:06:10.090 }, 00:06:10.090 { 00:06:10.090 "nbd_device": "/dev/nbd1", 00:06:10.090 "bdev_name": "Malloc1" 00:06:10.090 } 00:06:10.090 ]' 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.090 /dev/nbd1' 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.090 /dev/nbd1' 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.090 
17:16:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.090 256+0 records in 00:06:10.090 256+0 records out 00:06:10.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124887 s, 84.0 MB/s 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.090 256+0 records in 00:06:10.090 256+0 records out 00:06:10.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016456 s, 63.7 MB/s 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.090 256+0 records in 00:06:10.090 256+0 records out 00:06:10.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181879 s, 57.7 MB/s 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.090 17:16:18 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@51 -- # local i 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.090 17:16:18 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@41 -- # break 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@41 -- # break 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.352 17:16:18 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@65 -- # true 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.613 17:16:19 -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.613 17:16:19 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.874 17:16:19 -- event/event.sh@35 -- # sleep 3 00:06:10.874 [2024-10-13 17:16:19.366855] app.c: 798:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:06:10.874 [2024-10-13 17:16:19.394456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.874 [2024-10-13 17:16:19.394458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.134 [2024-10-13 17:16:19.426092] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.134 [2024-10-13 17:16:19.426130] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.438 17:16:22 -- event/event.sh@23 -- # for i in {0..2} 00:06:14.438 17:16:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:14.438 spdk_app_start Round 1 00:06:14.438 17:16:22 -- event/event.sh@25 -- # waitforlisten 2978600 /var/tmp/spdk-nbd.sock 00:06:14.438 17:16:22 -- common/autotest_common.sh@819 -- # '[' -z 2978600 ']' 00:06:14.438 17:16:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.438 17:16:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.438 17:16:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:14.438 17:16:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.438 17:16:22 -- common/autotest_common.sh@10 -- # set +x 00:06:14.438 17:16:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.438 17:16:22 -- common/autotest_common.sh@852 -- # return 0 00:06:14.438 17:16:22 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.438 Malloc0 00:06:14.438 17:16:22 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.438 Malloc1 00:06:14.438 17:16:22 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@12 -- # local i 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.438 /dev/nbd0 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.438 17:16:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:14.438 17:16:22 -- common/autotest_common.sh@857 -- # local i 00:06:14.438 17:16:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:14.438 17:16:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:14.438 17:16:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:14.438 17:16:22 -- common/autotest_common.sh@861 -- # break 00:06:14.438 17:16:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:14.438 17:16:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:14.438 17:16:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.438 1+0 records in 00:06:14.438 1+0 records out 00:06:14.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278002 s, 14.7 MB/s 00:06:14.438 17:16:22 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.438 17:16:22 -- common/autotest_common.sh@874 -- # size=4096 00:06:14.438 17:16:22 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.438 17:16:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:14.438 17:16:22 -- common/autotest_common.sh@877 -- # return 0 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.438 17:16:22 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:06:14.700 /dev/nbd1 00:06:14.700 17:16:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.700 17:16:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.700 17:16:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:14.700 17:16:23 -- common/autotest_common.sh@857 -- # local i 00:06:14.700 17:16:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:14.700 17:16:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:14.700 17:16:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:14.700 17:16:23 -- common/autotest_common.sh@861 -- # break 00:06:14.700 17:16:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:14.700 17:16:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:14.700 17:16:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.700 1+0 records in 00:06:14.700 1+0 records out 00:06:14.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278921 s, 14.7 MB/s 00:06:14.700 17:16:23 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.700 17:16:23 -- common/autotest_common.sh@874 -- # size=4096 00:06:14.700 17:16:23 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.700 17:16:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:14.700 17:16:23 -- common/autotest_common.sh@877 -- # return 0 00:06:14.700 17:16:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.700 17:16:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.700 17:16:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.701 17:16:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.701 17:16:23 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.995 { 00:06:14.995 "nbd_device": "/dev/nbd0", 00:06:14.995 "bdev_name": "Malloc0" 00:06:14.995 }, 00:06:14.995 { 00:06:14.995 "nbd_device": "/dev/nbd1", 00:06:14.995 "bdev_name": "Malloc1" 00:06:14.995 } 00:06:14.995 ]' 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.995 { 00:06:14.995 "nbd_device": "/dev/nbd0", 00:06:14.995 "bdev_name": "Malloc0" 00:06:14.995 }, 00:06:14.995 { 00:06:14.995 "nbd_device": "/dev/nbd1", 00:06:14.995 "bdev_name": "Malloc1" 00:06:14.995 } 00:06:14.995 ]' 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.995 /dev/nbd1' 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.995 /dev/nbd1' 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.995 256+0 records in 00:06:14.995 256+0 records out 00:06:14.995 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127063 s, 82.5 MB/s 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.995 256+0 records in 00:06:14.995 256+0 records out 00:06:14.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016206 s, 64.7 MB/s 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.995 256+0 records in 00:06:14.995 256+0 records out 00:06:14.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176007 s, 59.6 MB/s 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.995 17:16:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.996 17:16:23 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.996 17:16:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.996 17:16:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.996 17:16:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.996 17:16:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.996 17:16:23 -- bdev/nbd_common.sh@51 -- # local i 00:06:14.996 17:16:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.996 17:16:23 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.257 17:16:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.257 17:16:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.257 17:16:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.257 17:16:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.257 17:16:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.257 17:16:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.257 17:16:23 -- bdev/nbd_common.sh@41 -- # break 00:06:15.257 17:16:23 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.257 17:16:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.257 17:16:23 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.518 17:16:23 -- 
bdev/nbd_common.sh@41 -- # break 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.518 17:16:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.518 17:16:24 -- bdev/nbd_common.sh@65 -- # true 00:06:15.518 17:16:24 -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.518 17:16:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.518 17:16:24 -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.518 17:16:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.518 17:16:24 -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.518 17:16:24 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.779 17:16:24 -- event/event.sh@35 -- # sleep 3 00:06:15.779 [2024-10-13 17:16:24.288570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.040 [2024-10-13 17:16:24.316132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.040 [2024-10-13 17:16:24.316148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.040 [2024-10-13 17:16:24.347918] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
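The `waitfornbd` helper traced repeatedly above (the `(( i <= 20 ))` / `grep -q -w nbdX /proc/partitions` / `break` sequence) is essentially a bounded polling loop. A minimal sketch of that pattern follows; the `partitions_file` parameter is an illustrative addition (the real helper reads `/proc/partitions` directly and additionally confirms the device is usable with a direct-I/O `dd` read, which is omitted here):

```shell
# Simplified sketch of the waitfornbd polling pattern seen in the trace:
# poll up to 20 times for the device name to appear as a whole word in the
# partitions table, sleeping briefly between attempts.
waitfornbd_sketch() {
    local nbd_name=$1
    local partitions_file=${2:-/proc/partitions}  # parameter added for illustration
    local i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the device name as a whole word, so nbd1 does not match nbd10
        grep -q -w "$nbd_name" "$partitions_file" && return 0
        sleep 0.1
    done
    return 1  # device never showed up within the retry budget
}
```

The bounded retry count keeps a missing device from hanging the test run; a failed attach surfaces as a nonzero return within about two seconds rather than an indefinite wait.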
00:06:16.040 [2024-10-13 17:16:24.347955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.342 17:16:27 -- event/event.sh@23 -- # for i in {0..2} 00:06:19.342 17:16:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:19.342 spdk_app_start Round 2 00:06:19.342 17:16:27 -- event/event.sh@25 -- # waitforlisten 2978600 /var/tmp/spdk-nbd.sock 00:06:19.342 17:16:27 -- common/autotest_common.sh@819 -- # '[' -z 2978600 ']' 00:06:19.342 17:16:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.342 17:16:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.342 17:16:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.342 17:16:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.342 17:16:27 -- common/autotest_common.sh@10 -- # set +x 00:06:19.342 17:16:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.342 17:16:27 -- common/autotest_common.sh@852 -- # return 0 00:06:19.342 17:16:27 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.342 Malloc0 00:06:19.342 17:16:27 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.342 Malloc1 00:06:19.342 17:16:27 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.342 17:16:27 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@12 -- # local i 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.342 /dev/nbd0 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.342 17:16:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.342 17:16:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:19.342 17:16:27 -- common/autotest_common.sh@857 -- # local i 00:06:19.342 17:16:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.342 17:16:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.342 17:16:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:19.342 17:16:27 -- common/autotest_common.sh@861 -- # break 00:06:19.342 17:16:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.342 17:16:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.342 17:16:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.342 1+0 records in 00:06:19.342 
1+0 records out 00:06:19.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304773 s, 13.4 MB/s 00:06:19.342 17:16:27 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.342 17:16:27 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.342 17:16:27 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.342 17:16:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:19.342 17:16:27 -- common/autotest_common.sh@877 -- # return 0 00:06:19.612 17:16:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.612 17:16:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.612 17:16:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.612 /dev/nbd1 00:06:19.612 17:16:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.612 17:16:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.612 17:16:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:19.612 17:16:28 -- common/autotest_common.sh@857 -- # local i 00:06:19.612 17:16:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.612 17:16:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.612 17:16:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:19.612 17:16:28 -- common/autotest_common.sh@861 -- # break 00:06:19.612 17:16:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.612 17:16:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.612 17:16:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.612 1+0 records in 00:06:19.612 1+0 records out 00:06:19.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265356 s, 15.4 MB/s 00:06:19.612 17:16:28 -- 
common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.612 17:16:28 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.612 17:16:28 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.612 17:16:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:19.612 17:16:28 -- common/autotest_common.sh@877 -- # return 0 00:06:19.612 17:16:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.612 17:16:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.612 17:16:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.612 17:16:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.612 17:16:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.882 { 00:06:19.882 "nbd_device": "/dev/nbd0", 00:06:19.882 "bdev_name": "Malloc0" 00:06:19.882 }, 00:06:19.882 { 00:06:19.882 "nbd_device": "/dev/nbd1", 00:06:19.882 "bdev_name": "Malloc1" 00:06:19.882 } 00:06:19.882 ]' 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.882 { 00:06:19.882 "nbd_device": "/dev/nbd0", 00:06:19.882 "bdev_name": "Malloc0" 00:06:19.882 }, 00:06:19.882 { 00:06:19.882 "nbd_device": "/dev/nbd1", 00:06:19.882 "bdev_name": "Malloc1" 00:06:19.882 } 00:06:19.882 ]' 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.882 /dev/nbd1' 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.882 /dev/nbd1' 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.882 
17:16:28 -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.882 17:16:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.883 256+0 records in 00:06:19.883 256+0 records out 00:06:19.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127444 s, 82.3 MB/s 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.883 256+0 records in 00:06:19.883 256+0 records out 00:06:19.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163676 s, 64.1 MB/s 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.883 256+0 records in 00:06:19.883 256+0 records out 00:06:19.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176087 s, 59.5 MB/s 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:06:19.883 17:16:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@51 -- # local i 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.883 17:16:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.143 17:16:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.143 17:16:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.143 17:16:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.143 17:16:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.143 17:16:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.143 17:16:28 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.143 17:16:28 -- bdev/nbd_common.sh@41 -- # break 00:06:20.143 17:16:28 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.143 17:16:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.143 17:16:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.404 17:16:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.404 17:16:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.404 17:16:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.404 17:16:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.404 17:16:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@41 -- # break 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.405 17:16:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.666 17:16:28 -- bdev/nbd_common.sh@65 -- # true 00:06:20.666 17:16:28 -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.666 17:16:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.666 17:16:28 -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.666 17:16:28 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.666 17:16:28 -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.666 17:16:28 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.666 17:16:29 -- event/event.sh@35 -- # sleep 3 00:06:20.926 [2024-10-13 17:16:29.215828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.926 [2024-10-13 17:16:29.243473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.926 [2024-10-13 17:16:29.243475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.926 [2024-10-13 17:16:29.274997] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.926 [2024-10-13 17:16:29.275034] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.225 17:16:32 -- event/event.sh@38 -- # waitforlisten 2978600 /var/tmp/spdk-nbd.sock 00:06:24.225 17:16:32 -- common/autotest_common.sh@819 -- # '[' -z 2978600 ']' 00:06:24.225 17:16:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.225 17:16:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.225 17:16:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
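The `nbd_get_count` sequence visible above (fetch `nbd_get_disks` JSON, extract `.nbd_device` with `jq -r`, then `grep -c /dev/nbd`) reduces to counting `/dev/nbdN` entries in the RPC reply. A minimal sketch under simplified assumptions: the JSON is passed in as an argument rather than fetched over the RPC socket, and occurrences are counted with `grep -o | wc -l` instead of the `jq` pipeline in the real helper:

```shell
# Simplified sketch of the nbd_get_count pattern from the trace: count how
# many /dev/nbdN devices appear in an nbd_get_disks-style JSON reply.
nbd_get_count_sketch() {
    local nbd_disks_json=$1
    # grep -o emits one line per match; wc -l turns that into a count.
    # When nothing matches, grep exits nonzero but wc still prints 0.
    echo "$nbd_disks_json" | grep -o '/dev/nbd[0-9][0-9]*' | wc -l
}
```

This is why the trace shows `count=2` while both disks are attached and `count=0` (via the `true` fallback when `grep -c` finds nothing) after `nbd_stop_disk` has detached them.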
00:06:24.225 17:16:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.225 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:06:24.225 17:16:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.225 17:16:32 -- common/autotest_common.sh@852 -- # return 0 00:06:24.225 17:16:32 -- event/event.sh@39 -- # killprocess 2978600 00:06:24.225 17:16:32 -- common/autotest_common.sh@926 -- # '[' -z 2978600 ']' 00:06:24.225 17:16:32 -- common/autotest_common.sh@930 -- # kill -0 2978600 00:06:24.225 17:16:32 -- common/autotest_common.sh@931 -- # uname 00:06:24.225 17:16:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.225 17:16:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2978600 00:06:24.225 17:16:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.225 17:16:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.225 17:16:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2978600' 00:06:24.225 killing process with pid 2978600 00:06:24.225 17:16:32 -- common/autotest_common.sh@945 -- # kill 2978600 00:06:24.225 17:16:32 -- common/autotest_common.sh@950 -- # wait 2978600 00:06:24.225 spdk_app_start is called in Round 0. 00:06:24.225 Shutdown signal received, stop current app iteration 00:06:24.225 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:24.225 spdk_app_start is called in Round 1. 00:06:24.225 Shutdown signal received, stop current app iteration 00:06:24.225 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:24.225 spdk_app_start is called in Round 2. 00:06:24.225 Shutdown signal received, stop current app iteration 00:06:24.226 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:24.226 spdk_app_start is called in Round 3. 
00:06:24.226 Shutdown signal received, stop current app iteration 00:06:24.226 17:16:32 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:24.226 17:16:32 -- event/event.sh@42 -- # return 0 00:06:24.226 00:06:24.226 real 0m15.770s 00:06:24.226 user 0m34.213s 00:06:24.226 sys 0m2.231s 00:06:24.226 17:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.226 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:06:24.226 ************************************ 00:06:24.226 END TEST app_repeat 00:06:24.226 ************************************ 00:06:24.226 17:16:32 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:24.226 17:16:32 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:24.226 17:16:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.226 17:16:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.226 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:06:24.226 ************************************ 00:06:24.226 START TEST cpu_locks 00:06:24.226 ************************************ 00:06:24.226 17:16:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:24.226 * Looking for test storage... 
00:06:24.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:24.226 17:16:32 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:24.226 17:16:32 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:24.226 17:16:32 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:24.226 17:16:32 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:24.226 17:16:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.226 17:16:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.226 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:06:24.226 ************************************ 00:06:24.226 START TEST default_locks 00:06:24.226 ************************************ 00:06:24.226 17:16:32 -- common/autotest_common.sh@1104 -- # default_locks 00:06:24.226 17:16:32 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2982028 00:06:24.226 17:16:32 -- event/cpu_locks.sh@47 -- # waitforlisten 2982028 00:06:24.226 17:16:32 -- common/autotest_common.sh@819 -- # '[' -z 2982028 ']' 00:06:24.226 17:16:32 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.226 17:16:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.226 17:16:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.226 17:16:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.226 17:16:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.226 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:06:24.226 [2024-10-13 17:16:32.658390] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:24.226 [2024-10-13 17:16:32.658467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982028 ] 00:06:24.226 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.226 [2024-10-13 17:16:32.727827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.487 [2024-10-13 17:16:32.764648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.487 [2024-10-13 17:16:32.764813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.059 17:16:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.059 17:16:33 -- common/autotest_common.sh@852 -- # return 0 00:06:25.059 17:16:33 -- event/cpu_locks.sh@49 -- # locks_exist 2982028 00:06:25.059 17:16:33 -- event/cpu_locks.sh@22 -- # lslocks -p 2982028 00:06:25.059 17:16:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.320 lslocks: write error 00:06:25.320 17:16:33 -- event/cpu_locks.sh@50 -- # killprocess 2982028 00:06:25.320 17:16:33 -- common/autotest_common.sh@926 -- # '[' -z 2982028 ']' 00:06:25.320 17:16:33 -- common/autotest_common.sh@930 -- # kill -0 2982028 00:06:25.320 17:16:33 -- common/autotest_common.sh@931 -- # uname 00:06:25.321 17:16:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:25.321 17:16:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2982028 00:06:25.321 17:16:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:25.321 17:16:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:25.321 17:16:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2982028' 00:06:25.321 killing process with pid 2982028 00:06:25.321 17:16:33 -- common/autotest_common.sh@945 -- # kill 2982028 00:06:25.321 17:16:33 -- common/autotest_common.sh@950 -- # 
wait 2982028 00:06:25.581 17:16:34 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2982028 00:06:25.581 17:16:34 -- common/autotest_common.sh@640 -- # local es=0 00:06:25.581 17:16:34 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2982028 00:06:25.581 17:16:34 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:25.581 17:16:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.582 17:16:34 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:25.582 17:16:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.582 17:16:34 -- common/autotest_common.sh@643 -- # waitforlisten 2982028 00:06:25.582 17:16:34 -- common/autotest_common.sh@819 -- # '[' -z 2982028 ']' 00:06:25.582 17:16:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.582 17:16:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.582 17:16:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:25.582 17:16:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.582 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:06:25.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2982028) - No such process 00:06:25.582 ERROR: process (pid: 2982028) is no longer running 00:06:25.582 17:16:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.582 17:16:34 -- common/autotest_common.sh@852 -- # return 1 00:06:25.582 17:16:34 -- common/autotest_common.sh@643 -- # es=1 00:06:25.582 17:16:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:25.582 17:16:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:25.582 17:16:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:25.582 17:16:34 -- event/cpu_locks.sh@54 -- # no_locks 00:06:25.582 17:16:34 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.582 17:16:34 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.582 17:16:34 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.582 00:06:25.582 real 0m1.438s 00:06:25.582 user 0m1.532s 00:06:25.582 sys 0m0.500s 00:06:25.582 17:16:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.582 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:06:25.582 ************************************ 00:06:25.582 END TEST default_locks 00:06:25.582 ************************************ 00:06:25.582 17:16:34 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:25.582 17:16:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:25.582 17:16:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.582 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:06:25.582 ************************************ 00:06:25.582 START TEST default_locks_via_rpc 00:06:25.582 ************************************ 00:06:25.582 17:16:34 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:25.582 17:16:34 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=2982278 00:06:25.582 17:16:34 -- event/cpu_locks.sh@63 -- # waitforlisten 2982278 00:06:25.582 17:16:34 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.582 17:16:34 -- common/autotest_common.sh@819 -- # '[' -z 2982278 ']' 00:06:25.582 17:16:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.582 17:16:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.582 17:16:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.582 17:16:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.582 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:06:25.842 [2024-10-13 17:16:34.148491] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:25.842 [2024-10-13 17:16:34.148554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982278 ] 00:06:25.842 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.842 [2024-10-13 17:16:34.212991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.842 [2024-10-13 17:16:34.244759] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.842 [2024-10-13 17:16:34.244909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.784 17:16:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.784 17:16:34 -- common/autotest_common.sh@852 -- # return 0 00:06:26.784 17:16:34 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:26.784 17:16:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:26.784 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:06:26.784 17:16:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:26.784 17:16:34 -- event/cpu_locks.sh@67 -- # no_locks 00:06:26.784 17:16:34 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.784 17:16:34 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.784 17:16:34 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.784 17:16:34 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.784 17:16:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:26.784 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:06:26.784 17:16:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:26.784 17:16:34 -- event/cpu_locks.sh@71 -- # locks_exist 2982278 00:06:26.784 17:16:34 -- event/cpu_locks.sh@22 -- # lslocks -p 2982278 00:06:26.784 17:16:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.044 17:16:35 -- event/cpu_locks.sh@73 -- # killprocess 2982278 
00:06:27.044 17:16:35 -- common/autotest_common.sh@926 -- # '[' -z 2982278 ']' 00:06:27.044 17:16:35 -- common/autotest_common.sh@930 -- # kill -0 2982278 00:06:27.044 17:16:35 -- common/autotest_common.sh@931 -- # uname 00:06:27.044 17:16:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.044 17:16:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2982278 00:06:27.044 17:16:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.044 17:16:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.044 17:16:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2982278' 00:06:27.044 killing process with pid 2982278 00:06:27.044 17:16:35 -- common/autotest_common.sh@945 -- # kill 2982278 00:06:27.044 17:16:35 -- common/autotest_common.sh@950 -- # wait 2982278 00:06:27.305 00:06:27.305 real 0m1.525s 00:06:27.305 user 0m1.699s 00:06:27.305 sys 0m0.496s 00:06:27.305 17:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.305 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:06:27.305 ************************************ 00:06:27.305 END TEST default_locks_via_rpc 00:06:27.305 ************************************ 00:06:27.305 17:16:35 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:27.305 17:16:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.305 17:16:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.305 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:06:27.305 ************************************ 00:06:27.305 START TEST non_locking_app_on_locked_coremask 00:06:27.305 ************************************ 00:06:27.305 17:16:35 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:27.305 17:16:35 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2982628 00:06:27.305 17:16:35 -- event/cpu_locks.sh@81 -- # waitforlisten 2982628 
/var/tmp/spdk.sock 00:06:27.305 17:16:35 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.305 17:16:35 -- common/autotest_common.sh@819 -- # '[' -z 2982628 ']' 00:06:27.305 17:16:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.305 17:16:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.305 17:16:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.305 17:16:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.305 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:06:27.305 [2024-10-13 17:16:35.710194] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:27.305 [2024-10-13 17:16:35.710254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982628 ] 00:06:27.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.305 [2024-10-13 17:16:35.773041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.305 [2024-10-13 17:16:35.805369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.305 [2024-10-13 17:16:35.805513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.246 17:16:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.246 17:16:36 -- common/autotest_common.sh@852 -- # return 0 00:06:28.246 17:16:36 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:28.246 17:16:36 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2982959 
00:06:28.246 17:16:36 -- event/cpu_locks.sh@85 -- # waitforlisten 2982959 /var/tmp/spdk2.sock 00:06:28.246 17:16:36 -- common/autotest_common.sh@819 -- # '[' -z 2982959 ']' 00:06:28.246 17:16:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.246 17:16:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.246 17:16:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.246 17:16:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.246 17:16:36 -- common/autotest_common.sh@10 -- # set +x 00:06:28.246 [2024-10-13 17:16:36.510726] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:28.246 [2024-10-13 17:16:36.510773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982959 ] 00:06:28.246 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.246 [2024-10-13 17:16:36.598811] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.246 [2024-10-13 17:16:36.598841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.246 [2024-10-13 17:16:36.662971] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.246 [2024-10-13 17:16:36.663107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.817 17:16:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.817 17:16:37 -- common/autotest_common.sh@852 -- # return 0 00:06:28.817 17:16:37 -- event/cpu_locks.sh@87 -- # locks_exist 2982628 00:06:28.817 17:16:37 -- event/cpu_locks.sh@22 -- # lslocks -p 2982628 00:06:28.817 17:16:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.390 lslocks: write error 00:06:29.390 17:16:37 -- event/cpu_locks.sh@89 -- # killprocess 2982628 00:06:29.390 17:16:37 -- common/autotest_common.sh@926 -- # '[' -z 2982628 ']' 00:06:29.390 17:16:37 -- common/autotest_common.sh@930 -- # kill -0 2982628 00:06:29.390 17:16:37 -- common/autotest_common.sh@931 -- # uname 00:06:29.390 17:16:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.390 17:16:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2982628 00:06:29.390 17:16:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.390 17:16:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.390 17:16:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2982628' 00:06:29.390 killing process with pid 2982628 00:06:29.390 17:16:37 -- common/autotest_common.sh@945 -- # kill 2982628 00:06:29.390 17:16:37 -- common/autotest_common.sh@950 -- # wait 2982628 00:06:29.650 17:16:38 -- event/cpu_locks.sh@90 -- # killprocess 2982959 00:06:29.650 17:16:38 -- common/autotest_common.sh@926 -- # '[' -z 2982959 ']' 00:06:29.650 17:16:38 -- common/autotest_common.sh@930 -- # kill -0 2982959 00:06:29.650 17:16:38 -- common/autotest_common.sh@931 -- # uname 00:06:29.650 17:16:38 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.650 17:16:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2982959 00:06:29.650 17:16:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.650 17:16:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.650 17:16:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2982959' 00:06:29.650 killing process with pid 2982959 00:06:29.650 17:16:38 -- common/autotest_common.sh@945 -- # kill 2982959 00:06:29.650 17:16:38 -- common/autotest_common.sh@950 -- # wait 2982959 00:06:29.910 00:06:29.910 real 0m2.695s 00:06:29.910 user 0m2.951s 00:06:29.910 sys 0m0.795s 00:06:29.910 17:16:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.910 17:16:38 -- common/autotest_common.sh@10 -- # set +x 00:06:29.910 ************************************ 00:06:29.910 END TEST non_locking_app_on_locked_coremask 00:06:29.910 ************************************ 00:06:29.910 17:16:38 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:29.910 17:16:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.910 17:16:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.910 17:16:38 -- common/autotest_common.sh@10 -- # set +x 00:06:29.910 ************************************ 00:06:29.910 START TEST locking_app_on_unlocked_coremask 00:06:29.910 ************************************ 00:06:29.910 17:16:38 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:29.910 17:16:38 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2983337 00:06:29.910 17:16:38 -- event/cpu_locks.sh@99 -- # waitforlisten 2983337 /var/tmp/spdk.sock 00:06:29.910 17:16:38 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:29.910 17:16:38 -- common/autotest_common.sh@819 -- # '[' -z 2983337 ']' 
00:06:29.910 17:16:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.910 17:16:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.910 17:16:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.910 17:16:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.910 17:16:38 -- common/autotest_common.sh@10 -- # set +x 00:06:30.171 [2024-10-13 17:16:38.448865] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:30.171 [2024-10-13 17:16:38.448928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2983337 ] 00:06:30.171 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.171 [2024-10-13 17:16:38.513100] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.171 [2024-10-13 17:16:38.513138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.171 [2024-10-13 17:16:38.544031] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.171 [2024-10-13 17:16:38.544201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.743 17:16:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.743 17:16:39 -- common/autotest_common.sh@852 -- # return 0 00:06:30.743 17:16:39 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2983430 00:06:30.743 17:16:39 -- event/cpu_locks.sh@103 -- # waitforlisten 2983430 /var/tmp/spdk2.sock 00:06:30.743 17:16:39 -- common/autotest_common.sh@819 -- # '[' -z 2983430 ']' 00:06:30.743 17:16:39 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.743 17:16:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.743 17:16:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.743 17:16:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.743 17:16:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.743 17:16:39 -- common/autotest_common.sh@10 -- # set +x 00:06:31.003 [2024-10-13 17:16:39.286126] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:31.003 [2024-10-13 17:16:39.286180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2983430 ] 00:06:31.003 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.003 [2024-10-13 17:16:39.381248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.003 [2024-10-13 17:16:39.438307] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.003 [2024-10-13 17:16:39.438441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.573 17:16:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.573 17:16:40 -- common/autotest_common.sh@852 -- # return 0 00:06:31.573 17:16:40 -- event/cpu_locks.sh@105 -- # locks_exist 2983430 00:06:31.573 17:16:40 -- event/cpu_locks.sh@22 -- # lslocks -p 2983430 00:06:31.573 17:16:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.833 lslocks: write error 00:06:31.833 17:16:40 -- event/cpu_locks.sh@107 -- # killprocess 2983337 00:06:31.833 17:16:40 -- common/autotest_common.sh@926 -- # '[' -z 2983337 ']' 00:06:31.833 17:16:40 -- common/autotest_common.sh@930 -- # kill -0 2983337 00:06:31.833 17:16:40 -- common/autotest_common.sh@931 -- # uname 00:06:31.833 17:16:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.833 17:16:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2983337 00:06:31.833 17:16:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.833 17:16:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.833 17:16:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2983337' 00:06:31.833 killing process with pid 2983337 00:06:31.833 17:16:40 -- common/autotest_common.sh@945 -- # kill 2983337 00:06:31.833 17:16:40 -- common/autotest_common.sh@950 -- # 
wait 2983337 00:06:32.404 17:16:40 -- event/cpu_locks.sh@108 -- # killprocess 2983430 00:06:32.404 17:16:40 -- common/autotest_common.sh@926 -- # '[' -z 2983430 ']' 00:06:32.404 17:16:40 -- common/autotest_common.sh@930 -- # kill -0 2983430 00:06:32.404 17:16:40 -- common/autotest_common.sh@931 -- # uname 00:06:32.404 17:16:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.404 17:16:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2983430 00:06:32.404 17:16:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.404 17:16:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.404 17:16:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2983430' 00:06:32.404 killing process with pid 2983430 00:06:32.404 17:16:40 -- common/autotest_common.sh@945 -- # kill 2983430 00:06:32.404 17:16:40 -- common/autotest_common.sh@950 -- # wait 2983430 00:06:32.665 00:06:32.665 real 0m2.550s 00:06:32.665 user 0m2.857s 00:06:32.665 sys 0m0.676s 00:06:32.665 17:16:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.665 17:16:40 -- common/autotest_common.sh@10 -- # set +x 00:06:32.665 ************************************ 00:06:32.665 END TEST locking_app_on_unlocked_coremask 00:06:32.665 ************************************ 00:06:32.665 17:16:40 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:32.665 17:16:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.665 17:16:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.665 17:16:40 -- common/autotest_common.sh@10 -- # set +x 00:06:32.665 ************************************ 00:06:32.665 START TEST locking_app_on_locked_coremask 00:06:32.665 ************************************ 00:06:32.665 17:16:40 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:32.665 17:16:40 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2983857 
00:06:32.665 17:16:40 -- event/cpu_locks.sh@116 -- # waitforlisten 2983857 /var/tmp/spdk.sock 00:06:32.665 17:16:40 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.665 17:16:40 -- common/autotest_common.sh@819 -- # '[' -z 2983857 ']' 00:06:32.665 17:16:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.665 17:16:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.665 17:16:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.665 17:16:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.665 17:16:40 -- common/autotest_common.sh@10 -- # set +x 00:06:32.665 [2024-10-13 17:16:41.042965] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:32.665 [2024-10-13 17:16:41.043027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2983857 ] 00:06:32.665 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.665 [2024-10-13 17:16:41.106359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.665 [2024-10-13 17:16:41.137536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.665 [2024-10-13 17:16:41.137679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.605 17:16:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.606 17:16:41 -- common/autotest_common.sh@852 -- # return 0 00:06:33.606 17:16:41 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2984055 00:06:33.606 17:16:41 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2984055 /var/tmp/spdk2.sock 
00:06:33.606 17:16:41 -- common/autotest_common.sh@640 -- # local es=0 00:06:33.606 17:16:41 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.606 17:16:41 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2984055 /var/tmp/spdk2.sock 00:06:33.606 17:16:41 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:33.606 17:16:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:33.606 17:16:41 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:33.606 17:16:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:33.606 17:16:41 -- common/autotest_common.sh@643 -- # waitforlisten 2984055 /var/tmp/spdk2.sock 00:06:33.606 17:16:41 -- common/autotest_common.sh@819 -- # '[' -z 2984055 ']' 00:06:33.606 17:16:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.606 17:16:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.606 17:16:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.606 17:16:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.606 17:16:41 -- common/autotest_common.sh@10 -- # set +x 00:06:33.606 [2024-10-13 17:16:41.866556] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:33.606 [2024-10-13 17:16:41.866610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2984055 ] 00:06:33.606 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.606 [2024-10-13 17:16:41.962215] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2983857 has claimed it. 00:06:33.606 [2024-10-13 17:16:41.962261] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2984055) - No such process 00:06:34.178 ERROR: process (pid: 2984055) is no longer running 00:06:34.178 17:16:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.178 17:16:42 -- common/autotest_common.sh@852 -- # return 1 00:06:34.178 17:16:42 -- common/autotest_common.sh@643 -- # es=1 00:06:34.178 17:16:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:34.178 17:16:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:34.178 17:16:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:34.178 17:16:42 -- event/cpu_locks.sh@122 -- # locks_exist 2983857 00:06:34.178 17:16:42 -- event/cpu_locks.sh@22 -- # lslocks -p 2983857 00:06:34.178 17:16:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.748 lslocks: write error 00:06:34.748 17:16:42 -- event/cpu_locks.sh@124 -- # killprocess 2983857 00:06:34.748 17:16:42 -- common/autotest_common.sh@926 -- # '[' -z 2983857 ']' 00:06:34.748 17:16:42 -- common/autotest_common.sh@930 -- # kill -0 2983857 00:06:34.748 17:16:42 -- common/autotest_common.sh@931 -- # uname 00:06:34.748 17:16:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.748 17:16:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2983857 00:06:34.748 17:16:43 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.748 17:16:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.748 17:16:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2983857' 00:06:34.748 killing process with pid 2983857 00:06:34.748 17:16:43 -- common/autotest_common.sh@945 -- # kill 2983857 00:06:34.748 17:16:43 -- common/autotest_common.sh@950 -- # wait 2983857 00:06:34.748 00:06:34.748 real 0m2.253s 00:06:34.748 user 0m2.505s 00:06:34.748 sys 0m0.611s 00:06:34.748 17:16:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.748 17:16:43 -- common/autotest_common.sh@10 -- # set +x 00:06:34.748 ************************************ 00:06:34.748 END TEST locking_app_on_locked_coremask 00:06:34.748 ************************************ 00:06:35.009 17:16:43 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:35.009 17:16:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.009 17:16:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.009 17:16:43 -- common/autotest_common.sh@10 -- # set +x 00:06:35.009 ************************************ 00:06:35.009 START TEST locking_overlapped_coremask 00:06:35.009 ************************************ 00:06:35.009 17:16:43 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:35.009 17:16:43 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2984418 00:06:35.009 17:16:43 -- event/cpu_locks.sh@133 -- # waitforlisten 2984418 /var/tmp/spdk.sock 00:06:35.009 17:16:43 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:35.009 17:16:43 -- common/autotest_common.sh@819 -- # '[' -z 2984418 ']' 00:06:35.009 17:16:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.009 17:16:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.009 17:16:43 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.009 17:16:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.009 17:16:43 -- common/autotest_common.sh@10 -- # set +x 00:06:35.009 [2024-10-13 17:16:43.341279] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:35.009 [2024-10-13 17:16:43.341334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2984418 ] 00:06:35.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.009 [2024-10-13 17:16:43.405283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.009 [2024-10-13 17:16:43.442207] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.009 [2024-10-13 17:16:43.442463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.009 [2024-10-13 17:16:43.442578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.009 [2024-10-13 17:16:43.442581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.951 17:16:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.951 17:16:44 -- common/autotest_common.sh@852 -- # return 0 00:06:35.951 17:16:44 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2984466 00:06:35.951 17:16:44 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2984466 /var/tmp/spdk2.sock 00:06:35.951 17:16:44 -- common/autotest_common.sh@640 -- # local es=0 00:06:35.951 17:16:44 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:35.951 17:16:44 -- common/autotest_common.sh@642 -- # 
valid_exec_arg waitforlisten 2984466 /var/tmp/spdk2.sock 00:06:35.951 17:16:44 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:35.951 17:16:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.951 17:16:44 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:35.951 17:16:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.951 17:16:44 -- common/autotest_common.sh@643 -- # waitforlisten 2984466 /var/tmp/spdk2.sock 00:06:35.951 17:16:44 -- common/autotest_common.sh@819 -- # '[' -z 2984466 ']' 00:06:35.951 17:16:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.951 17:16:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.951 17:16:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.951 17:16:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.951 17:16:44 -- common/autotest_common.sh@10 -- # set +x 00:06:35.951 [2024-10-13 17:16:44.213223] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:35.951 [2024-10-13 17:16:44.213275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2984466 ] 00:06:35.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.951 [2024-10-13 17:16:44.288058] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2984418 has claimed it. 00:06:35.951 [2024-10-13 17:16:44.288093] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:36.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2984466) - No such process 00:06:36.522 ERROR: process (pid: 2984466) is no longer running 00:06:36.522 17:16:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.522 17:16:44 -- common/autotest_common.sh@852 -- # return 1 00:06:36.522 17:16:44 -- common/autotest_common.sh@643 -- # es=1 00:06:36.522 17:16:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:36.522 17:16:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:36.522 17:16:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:36.522 17:16:44 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:36.522 17:16:44 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:36.522 17:16:44 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:36.522 17:16:44 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:36.522 17:16:44 -- event/cpu_locks.sh@141 -- # killprocess 2984418 00:06:36.522 17:16:44 -- common/autotest_common.sh@926 -- # '[' -z 2984418 ']' 00:06:36.522 17:16:44 -- common/autotest_common.sh@930 -- # kill -0 2984418 00:06:36.522 17:16:44 -- common/autotest_common.sh@931 -- # uname 00:06:36.522 17:16:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.522 17:16:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2984418 00:06:36.522 17:16:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.522 17:16:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.522 17:16:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2984418' 00:06:36.522 killing process with pid 2984418 00:06:36.522 
17:16:44 -- common/autotest_common.sh@945 -- # kill 2984418 00:06:36.522 17:16:44 -- common/autotest_common.sh@950 -- # wait 2984418 00:06:36.783 00:06:36.783 real 0m1.803s 00:06:36.783 user 0m5.259s 00:06:36.783 sys 0m0.383s 00:06:36.783 17:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.783 17:16:45 -- common/autotest_common.sh@10 -- # set +x 00:06:36.783 ************************************ 00:06:36.783 END TEST locking_overlapped_coremask 00:06:36.783 ************************************ 00:06:36.783 17:16:45 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:36.783 17:16:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.783 17:16:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.783 17:16:45 -- common/autotest_common.sh@10 -- # set +x 00:06:36.783 ************************************ 00:06:36.783 START TEST locking_overlapped_coremask_via_rpc 00:06:36.783 ************************************ 00:06:36.783 17:16:45 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:36.783 17:16:45 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2984799 00:06:36.783 17:16:45 -- event/cpu_locks.sh@149 -- # waitforlisten 2984799 /var/tmp/spdk.sock 00:06:36.783 17:16:45 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:36.783 17:16:45 -- common/autotest_common.sh@819 -- # '[' -z 2984799 ']' 00:06:36.783 17:16:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.783 17:16:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.783 17:16:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.783 17:16:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.783 17:16:45 -- common/autotest_common.sh@10 -- # set +x 00:06:36.783 [2024-10-13 17:16:45.193473] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:36.783 [2024-10-13 17:16:45.193533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2984799 ] 00:06:36.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.783 [2024-10-13 17:16:45.256161] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:36.783 [2024-10-13 17:16:45.256194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.783 [2024-10-13 17:16:45.288632] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.783 [2024-10-13 17:16:45.288887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.783 [2024-10-13 17:16:45.289007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.783 [2024-10-13 17:16:45.289010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.725 17:16:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.725 17:16:45 -- common/autotest_common.sh@852 -- # return 0 00:06:37.725 17:16:45 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2984890 00:06:37.725 17:16:45 -- event/cpu_locks.sh@153 -- # waitforlisten 2984890 /var/tmp/spdk2.sock 00:06:37.725 17:16:45 -- common/autotest_common.sh@819 -- # '[' -z 2984890 ']' 00:06:37.725 17:16:45 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.725 17:16:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.725 17:16:45 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:06:37.725 17:16:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.725 17:16:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.725 17:16:45 -- common/autotest_common.sh@10 -- # set +x 00:06:37.725 [2024-10-13 17:16:46.011165] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:37.725 [2024-10-13 17:16:46.011217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2984890 ] 00:06:37.725 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.725 [2024-10-13 17:16:46.091507] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:37.725 [2024-10-13 17:16:46.091534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.725 [2024-10-13 17:16:46.143060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.725 [2024-10-13 17:16:46.147345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.725 [2024-10-13 17:16:46.147501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.725 [2024-10-13 17:16:46.147505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:38.295 17:16:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.295 17:16:46 -- common/autotest_common.sh@852 -- # return 0 00:06:38.295 17:16:46 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.295 17:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.295 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:38.295 17:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:06:38.295 17:16:46 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.295 17:16:46 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.295 17:16:46 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.295 17:16:46 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:38.296 17:16:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.296 17:16:46 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:38.296 17:16:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.296 17:16:46 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.296 17:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.296 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:38.296 [2024-10-13 17:16:46.787118] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2984799 has claimed it. 
00:06:38.296 request: 00:06:38.296 { 00:06:38.296 "method": "framework_enable_cpumask_locks", 00:06:38.296 "req_id": 1 00:06:38.296 } 00:06:38.296 Got JSON-RPC error response 00:06:38.296 response: 00:06:38.296 { 00:06:38.296 "code": -32603, 00:06:38.296 "message": "Failed to claim CPU core: 2" 00:06:38.296 } 00:06:38.296 17:16:46 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:38.296 17:16:46 -- common/autotest_common.sh@643 -- # es=1 00:06:38.296 17:16:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.296 17:16:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.296 17:16:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.296 17:16:46 -- event/cpu_locks.sh@158 -- # waitforlisten 2984799 /var/tmp/spdk.sock 00:06:38.296 17:16:46 -- common/autotest_common.sh@819 -- # '[' -z 2984799 ']' 00:06:38.296 17:16:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.296 17:16:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.296 17:16:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:38.296 17:16:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.296 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:38.556 17:16:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.556 17:16:46 -- common/autotest_common.sh@852 -- # return 0 00:06:38.556 17:16:46 -- event/cpu_locks.sh@159 -- # waitforlisten 2984890 /var/tmp/spdk2.sock 00:06:38.556 17:16:46 -- common/autotest_common.sh@819 -- # '[' -z 2984890 ']' 00:06:38.556 17:16:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.556 17:16:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.556 17:16:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.556 17:16:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.556 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:38.816 17:16:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.816 17:16:47 -- common/autotest_common.sh@852 -- # return 0 00:06:38.816 17:16:47 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:38.816 17:16:47 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.816 17:16:47 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.816 17:16:47 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.816 00:06:38.816 real 0m1.998s 00:06:38.816 user 0m0.775s 00:06:38.816 sys 0m0.140s 00:06:38.816 17:16:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.816 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:06:38.816 
************************************ 00:06:38.816 END TEST locking_overlapped_coremask_via_rpc 00:06:38.816 ************************************ 00:06:38.816 17:16:47 -- event/cpu_locks.sh@174 -- # cleanup 00:06:38.816 17:16:47 -- event/cpu_locks.sh@15 -- # [[ -z 2984799 ]] 00:06:38.816 17:16:47 -- event/cpu_locks.sh@15 -- # killprocess 2984799 00:06:38.816 17:16:47 -- common/autotest_common.sh@926 -- # '[' -z 2984799 ']' 00:06:38.816 17:16:47 -- common/autotest_common.sh@930 -- # kill -0 2984799 00:06:38.817 17:16:47 -- common/autotest_common.sh@931 -- # uname 00:06:38.817 17:16:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:38.817 17:16:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2984799 00:06:38.817 17:16:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:38.817 17:16:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:38.817 17:16:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2984799' 00:06:38.817 killing process with pid 2984799 00:06:38.817 17:16:47 -- common/autotest_common.sh@945 -- # kill 2984799 00:06:38.817 17:16:47 -- common/autotest_common.sh@950 -- # wait 2984799 00:06:39.077 17:16:47 -- event/cpu_locks.sh@16 -- # [[ -z 2984890 ]] 00:06:39.077 17:16:47 -- event/cpu_locks.sh@16 -- # killprocess 2984890 00:06:39.077 17:16:47 -- common/autotest_common.sh@926 -- # '[' -z 2984890 ']' 00:06:39.077 17:16:47 -- common/autotest_common.sh@930 -- # kill -0 2984890 00:06:39.077 17:16:47 -- common/autotest_common.sh@931 -- # uname 00:06:39.077 17:16:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.077 17:16:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2984890 00:06:39.077 17:16:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:39.077 17:16:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:39.077 17:16:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
2984890' 00:06:39.077 killing process with pid 2984890 00:06:39.077 17:16:47 -- common/autotest_common.sh@945 -- # kill 2984890 00:06:39.077 17:16:47 -- common/autotest_common.sh@950 -- # wait 2984890 00:06:39.338 17:16:47 -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.338 17:16:47 -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.338 17:16:47 -- event/cpu_locks.sh@15 -- # [[ -z 2984799 ]] 00:06:39.338 17:16:47 -- event/cpu_locks.sh@15 -- # killprocess 2984799 00:06:39.338 17:16:47 -- common/autotest_common.sh@926 -- # '[' -z 2984799 ']' 00:06:39.338 17:16:47 -- common/autotest_common.sh@930 -- # kill -0 2984799 00:06:39.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2984799) - No such process 00:06:39.338 17:16:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2984799 is not found' 00:06:39.338 Process with pid 2984799 is not found 00:06:39.338 17:16:47 -- event/cpu_locks.sh@16 -- # [[ -z 2984890 ]] 00:06:39.338 17:16:47 -- event/cpu_locks.sh@16 -- # killprocess 2984890 00:06:39.338 17:16:47 -- common/autotest_common.sh@926 -- # '[' -z 2984890 ']' 00:06:39.338 17:16:47 -- common/autotest_common.sh@930 -- # kill -0 2984890 00:06:39.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2984890) - No such process 00:06:39.338 17:16:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2984890 is not found' 00:06:39.338 Process with pid 2984890 is not found 00:06:39.338 17:16:47 -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.338 00:06:39.338 real 0m15.207s 00:06:39.338 user 0m27.288s 00:06:39.338 sys 0m4.363s 00:06:39.338 17:16:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.338 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:06:39.338 ************************************ 00:06:39.338 END TEST cpu_locks 00:06:39.338 ************************************ 00:06:39.338 00:06:39.338 real 0m41.144s 00:06:39.338 user 1m21.943s 
00:06:39.338 sys 0m7.548s 00:06:39.338 17:16:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.338 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:06:39.338 ************************************ 00:06:39.338 END TEST event 00:06:39.338 ************************************ 00:06:39.338 17:16:47 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:39.338 17:16:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.338 17:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.338 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:06:39.338 ************************************ 00:06:39.338 START TEST thread 00:06:39.338 ************************************ 00:06:39.338 17:16:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:39.604 * Looking for test storage... 00:06:39.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:39.604 17:16:47 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.604 17:16:47 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:39.604 17:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.604 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:06:39.604 ************************************ 00:06:39.604 START TEST thread_poller_perf 00:06:39.604 ************************************ 00:06:39.604 17:16:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.604 [2024-10-13 17:16:47.911829] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:39.604 [2024-10-13 17:16:47.911942] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2985426 ] 00:06:39.604 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.604 [2024-10-13 17:16:47.980944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.604 [2024-10-13 17:16:48.017373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.604 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:40.547 [2024-10-13T15:16:49.071Z] ====================================== 00:06:40.547 [2024-10-13T15:16:49.071Z] busy:2412524412 (cyc) 00:06:40.547 [2024-10-13T15:16:49.072Z] total_run_count: 276000 00:06:40.548 [2024-10-13T15:16:49.072Z] tsc_hz: 2400000000 (cyc) 00:06:40.548 [2024-10-13T15:16:49.072Z] ====================================== 00:06:40.548 [2024-10-13T15:16:49.072Z] poller_cost: 8741 (cyc), 3642 (nsec) 00:06:40.548 00:06:40.548 real 0m1.174s 00:06:40.548 user 0m1.093s 00:06:40.548 sys 0m0.076s 00:06:40.548 17:16:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.548 17:16:49 -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 ************************************ 00:06:40.548 END TEST thread_poller_perf 00:06:40.548 ************************************ 00:06:40.809 17:16:49 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.809 17:16:49 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:40.809 17:16:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.809 17:16:49 -- common/autotest_common.sh@10 -- # set +x 00:06:40.809 ************************************ 00:06:40.809 START TEST thread_poller_perf 00:06:40.809 ************************************ 00:06:40.809 17:16:49 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.809 [2024-10-13 17:16:49.127845] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:40.809 [2024-10-13 17:16:49.127944] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2985604 ] 00:06:40.809 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.809 [2024-10-13 17:16:49.192559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.809 [2024-10-13 17:16:49.222469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.809 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:41.749 [2024-10-13T15:16:50.273Z] ====================================== 00:06:41.749 [2024-10-13T15:16:50.273Z] busy:2402817618 (cyc) 00:06:41.749 [2024-10-13T15:16:50.273Z] total_run_count: 3800000 00:06:41.749 [2024-10-13T15:16:50.273Z] tsc_hz: 2400000000 (cyc) 00:06:41.749 [2024-10-13T15:16:50.273Z] ====================================== 00:06:41.749 [2024-10-13T15:16:50.273Z] poller_cost: 632 (cyc), 263 (nsec) 00:06:41.749 00:06:41.749 real 0m1.157s 00:06:41.749 user 0m1.076s 00:06:41.749 sys 0m0.077s 00:06:41.749 17:16:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.749 17:16:50 -- common/autotest_common.sh@10 -- # set +x 00:06:41.749 ************************************ 00:06:41.749 END TEST thread_poller_perf 00:06:41.749 ************************************ 00:06:42.010 17:16:50 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.010 00:06:42.010 real 0m2.511s 00:06:42.010 user 0m2.241s 00:06:42.010 sys 0m0.281s 00:06:42.010 17:16:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.010 17:16:50 -- common/autotest_common.sh@10 -- # set +x 
00:06:42.010 ************************************ 00:06:42.010 END TEST thread 00:06:42.010 ************************************ 00:06:42.010 17:16:50 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:42.010 17:16:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.010 17:16:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.010 17:16:50 -- common/autotest_common.sh@10 -- # set +x 00:06:42.010 ************************************ 00:06:42.010 START TEST accel 00:06:42.010 ************************************ 00:06:42.010 17:16:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:42.010 * Looking for test storage... 00:06:42.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:42.010 17:16:50 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:42.010 17:16:50 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:42.010 17:16:50 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.010 17:16:50 -- accel/accel.sh@59 -- # spdk_tgt_pid=2985994 00:06:42.010 17:16:50 -- accel/accel.sh@60 -- # waitforlisten 2985994 00:06:42.010 17:16:50 -- common/autotest_common.sh@819 -- # '[' -z 2985994 ']' 00:06:42.010 17:16:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.010 17:16:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.010 17:16:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.010 17:16:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.010 17:16:50 -- common/autotest_common.sh@10 -- # set +x 00:06:42.010 17:16:50 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:42.010 17:16:50 -- accel/accel.sh@58 -- # build_accel_config 00:06:42.010 17:16:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.010 17:16:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.010 17:16:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.010 17:16:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.010 17:16:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.010 17:16:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.010 17:16:50 -- accel/accel.sh@42 -- # jq -r . 00:06:42.010 [2024-10-13 17:16:50.462889] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:42.010 [2024-10-13 17:16:50.462955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2985994 ] 00:06:42.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.010 [2024-10-13 17:16:50.527173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.270 [2024-10-13 17:16:50.562240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.270 [2024-10-13 17:16:50.562392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.841 17:16:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:42.841 17:16:51 -- common/autotest_common.sh@852 -- # return 0 00:06:42.841 17:16:51 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:42.841 17:16:51 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:42.841 17:16:51 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:42.841 17:16:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.841 17:16:51 -- common/autotest_common.sh@10 -- # set +x 00:06:42.841 17:16:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.841 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.841 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.841 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.841 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.841 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.841 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.841 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.841 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.841 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.841 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.841 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.841 
17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.841 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.841 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.841 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.841 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.841 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.841 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.841 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.841 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.841 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.842 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.842 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.842 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.842 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.842 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.842 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.842 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.842 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.842 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.842 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.842 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.842 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.842 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.842 17:16:51 -- accel/accel.sh@64 -- # read 
-r opc module 00:06:42.842 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.842 17:16:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.842 17:16:51 -- accel/accel.sh@64 -- # IFS== 00:06:42.842 17:16:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:42.842 17:16:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:42.842 17:16:51 -- accel/accel.sh@67 -- # killprocess 2985994 00:06:42.842 17:16:51 -- common/autotest_common.sh@926 -- # '[' -z 2985994 ']' 00:06:42.842 17:16:51 -- common/autotest_common.sh@930 -- # kill -0 2985994 00:06:42.842 17:16:51 -- common/autotest_common.sh@931 -- # uname 00:06:42.842 17:16:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:42.842 17:16:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2985994 00:06:42.842 17:16:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:42.842 17:16:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:42.842 17:16:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2985994' 00:06:42.842 killing process with pid 2985994 00:06:42.842 17:16:51 -- common/autotest_common.sh@945 -- # kill 2985994 00:06:42.842 17:16:51 -- common/autotest_common.sh@950 -- # wait 2985994 00:06:43.102 17:16:51 -- accel/accel.sh@68 -- # trap - ERR 00:06:43.102 17:16:51 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:43.102 17:16:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:43.102 17:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.102 17:16:51 -- common/autotest_common.sh@10 -- # set +x 00:06:43.102 17:16:51 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:43.103 17:16:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:43.103 17:16:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.103 17:16:51 -- accel/accel.sh@32 -- # 
accel_json_cfg=() 00:06:43.103 17:16:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.103 17:16:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.103 17:16:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.103 17:16:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.103 17:16:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.103 17:16:51 -- accel/accel.sh@42 -- # jq -r . 00:06:43.103 17:16:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.103 17:16:51 -- common/autotest_common.sh@10 -- # set +x 00:06:43.103 17:16:51 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:43.103 17:16:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:43.103 17:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.103 17:16:51 -- common/autotest_common.sh@10 -- # set +x 00:06:43.103 ************************************ 00:06:43.103 START TEST accel_missing_filename 00:06:43.103 ************************************ 00:06:43.103 17:16:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:43.103 17:16:51 -- common/autotest_common.sh@640 -- # local es=0 00:06:43.103 17:16:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:43.103 17:16:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:43.103 17:16:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.103 17:16:51 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:43.103 17:16:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.103 17:16:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:43.103 17:16:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:43.103 17:16:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.103 17:16:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.103 
17:16:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.103 17:16:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.103 17:16:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.103 17:16:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.103 17:16:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.103 17:16:51 -- accel/accel.sh@42 -- # jq -r . 00:06:43.103 [2024-10-13 17:16:51.623106] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:43.103 [2024-10-13 17:16:51.623181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2986360 ] 00:06:43.363 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.363 [2024-10-13 17:16:51.687146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.363 [2024-10-13 17:16:51.716443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.363 [2024-10-13 17:16:51.748395] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.363 [2024-10-13 17:16:51.785509] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:43.363 A filename is required. 
00:06:43.363 17:16:51 -- common/autotest_common.sh@643 -- # es=234 00:06:43.363 17:16:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:43.363 17:16:51 -- common/autotest_common.sh@652 -- # es=106 00:06:43.363 17:16:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:43.363 17:16:51 -- common/autotest_common.sh@660 -- # es=1 00:06:43.363 17:16:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:43.363 00:06:43.363 real 0m0.227s 00:06:43.363 user 0m0.161s 00:06:43.363 sys 0m0.105s 00:06:43.363 17:16:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.363 17:16:51 -- common/autotest_common.sh@10 -- # set +x 00:06:43.363 ************************************ 00:06:43.363 END TEST accel_missing_filename 00:06:43.363 ************************************ 00:06:43.363 17:16:51 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:43.363 17:16:51 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:43.363 17:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.363 17:16:51 -- common/autotest_common.sh@10 -- # set +x 00:06:43.363 ************************************ 00:06:43.363 START TEST accel_compress_verify 00:06:43.363 ************************************ 00:06:43.363 17:16:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:43.363 17:16:51 -- common/autotest_common.sh@640 -- # local es=0 00:06:43.363 17:16:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:43.363 17:16:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:43.363 17:16:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.363 17:16:51 -- common/autotest_common.sh@632 -- # type -t 
accel_perf 00:06:43.363 17:16:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.363 17:16:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:43.363 17:16:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:43.363 17:16:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.363 17:16:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.363 17:16:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.363 17:16:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.363 17:16:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.363 17:16:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.363 17:16:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.363 17:16:51 -- accel/accel.sh@42 -- # jq -r . 00:06:43.624 [2024-10-13 17:16:51.889739] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:43.624 [2024-10-13 17:16:51.889830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2986388 ] 00:06:43.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.624 [2024-10-13 17:16:51.953777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.624 [2024-10-13 17:16:51.983309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.624 [2024-10-13 17:16:52.015177] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.624 [2024-10-13 17:16:52.051960] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:43.624 00:06:43.624 Compression does not support the verify option, aborting. 
00:06:43.624 17:16:52 -- common/autotest_common.sh@643 -- # es=161 00:06:43.624 17:16:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:43.624 17:16:52 -- common/autotest_common.sh@652 -- # es=33 00:06:43.624 17:16:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:43.624 17:16:52 -- common/autotest_common.sh@660 -- # es=1 00:06:43.624 17:16:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:43.624 00:06:43.624 real 0m0.228s 00:06:43.624 user 0m0.169s 00:06:43.624 sys 0m0.103s 00:06:43.624 17:16:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.624 17:16:52 -- common/autotest_common.sh@10 -- # set +x 00:06:43.624 ************************************ 00:06:43.624 END TEST accel_compress_verify 00:06:43.624 ************************************ 00:06:43.624 17:16:52 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:43.624 17:16:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:43.624 17:16:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.624 17:16:52 -- common/autotest_common.sh@10 -- # set +x 00:06:43.624 ************************************ 00:06:43.624 START TEST accel_wrong_workload 00:06:43.624 ************************************ 00:06:43.624 17:16:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:43.624 17:16:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:43.624 17:16:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:43.624 17:16:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:43.624 17:16:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.624 17:16:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:43.624 17:16:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.624 17:16:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:43.624 17:16:52 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:43.624 17:16:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.624 17:16:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.624 17:16:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.624 17:16:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.624 17:16:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.624 17:16:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.624 17:16:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.624 17:16:52 -- accel/accel.sh@42 -- # jq -r . 00:06:43.886 Unsupported workload type: foobar 00:06:43.886 [2024-10-13 17:16:52.153731] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:43.886 accel_perf options: 00:06:43.886 [-h help message] 00:06:43.886 [-q queue depth per core] 00:06:43.886 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:43.886 [-T number of threads per core 00:06:43.886 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:43.886 [-t time in seconds] 00:06:43.886 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:43.886 [ dif_verify, , dif_generate, dif_generate_copy 00:06:43.886 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:43.886 [-l for compress/decompress workloads, name of uncompressed input file 00:06:43.886 [-S for crc32c workload, use this seed value (default 0) 00:06:43.886 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:43.886 [-f for fill workload, use this BYTE value (default 255) 00:06:43.886 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:43.886 [-y verify result if this switch is on] 00:06:43.886 [-a tasks to allocate per core (default: same value as -q)] 00:06:43.886 Can be used to spread operations across a wider range of memory. 00:06:43.886 17:16:52 -- common/autotest_common.sh@643 -- # es=1 00:06:43.886 17:16:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:43.886 17:16:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:43.886 17:16:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:43.886 00:06:43.886 real 0m0.031s 00:06:43.886 user 0m0.020s 00:06:43.886 sys 0m0.011s 00:06:43.886 17:16:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.886 17:16:52 -- common/autotest_common.sh@10 -- # set +x 00:06:43.886 ************************************ 00:06:43.886 END TEST accel_wrong_workload 00:06:43.886 ************************************ 00:06:43.886 Error: writing output failed: Broken pipe 00:06:43.886 17:16:52 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:43.886 17:16:52 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:43.886 17:16:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:06:43.886 17:16:52 -- common/autotest_common.sh@10 -- # set +x 00:06:43.886 ************************************ 00:06:43.886 START TEST accel_negative_buffers 00:06:43.886 ************************************ 00:06:43.886 17:16:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:43.886 17:16:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:43.886 17:16:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:43.886 17:16:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:43.886 17:16:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.886 17:16:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:43.886 17:16:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.886 17:16:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:43.886 17:16:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:43.886 17:16:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.886 17:16:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.886 17:16:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.886 17:16:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.886 17:16:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.886 17:16:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.886 17:16:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.886 17:16:52 -- accel/accel.sh@42 -- # jq -r . 00:06:43.886 -x option must be non-negative. 
00:06:43.886 [2024-10-13 17:16:52.222060] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:43.886 accel_perf options: 00:06:43.886 [-h help message] 00:06:43.886 [-q queue depth per core] 00:06:43.886 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:43.886 [-T number of threads per core 00:06:43.886 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:43.886 [-t time in seconds] 00:06:43.886 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:43.886 [ dif_verify, , dif_generate, dif_generate_copy 00:06:43.886 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:43.886 [-l for compress/decompress workloads, name of uncompressed input file 00:06:43.886 [-S for crc32c workload, use this seed value (default 0) 00:06:43.886 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:43.886 [-f for fill workload, use this BYTE value (default 255) 00:06:43.886 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:43.886 [-y verify result if this switch is on] 00:06:43.886 [-a tasks to allocate per core (default: same value as -q)] 00:06:43.886 Can be used to spread operations across a wider range of memory. 
00:06:43.886 17:16:52 -- common/autotest_common.sh@643 -- # es=1 00:06:43.886 17:16:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:43.886 17:16:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:43.886 17:16:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:43.886 00:06:43.886 real 0m0.031s 00:06:43.886 user 0m0.017s 00:06:43.886 sys 0m0.014s 00:06:43.886 17:16:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.886 17:16:52 -- common/autotest_common.sh@10 -- # set +x 00:06:43.886 ************************************ 00:06:43.886 END TEST accel_negative_buffers 00:06:43.886 ************************************ 00:06:43.886 Error: writing output failed: Broken pipe 00:06:43.886 17:16:52 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:43.886 17:16:52 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:43.886 17:16:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.886 17:16:52 -- common/autotest_common.sh@10 -- # set +x 00:06:43.886 ************************************ 00:06:43.886 START TEST accel_crc32c 00:06:43.886 ************************************ 00:06:43.886 17:16:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:43.886 17:16:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.886 17:16:52 -- accel/accel.sh@17 -- # local accel_module 00:06:43.886 17:16:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:43.886 17:16:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:43.886 17:16:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.886 17:16:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.886 17:16:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.886 17:16:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.886 17:16:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.886 17:16:52 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.886 17:16:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.886 17:16:52 -- accel/accel.sh@42 -- # jq -r . 00:06:43.886 [2024-10-13 17:16:52.295845] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:43.886 [2024-10-13 17:16:52.295915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2986442 ] 00:06:43.886 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.887 [2024-10-13 17:16:52.359807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.887 [2024-10-13 17:16:52.387662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.272 17:16:53 -- accel/accel.sh@18 -- # out=' 00:06:45.272 SPDK Configuration: 00:06:45.272 Core mask: 0x1 00:06:45.272 00:06:45.272 Accel Perf Configuration: 00:06:45.272 Workload Type: crc32c 00:06:45.272 CRC-32C seed: 32 00:06:45.272 Transfer size: 4096 bytes 00:06:45.272 Vector count 1 00:06:45.272 Module: software 00:06:45.272 Queue depth: 32 00:06:45.272 Allocate depth: 32 00:06:45.272 # threads/core: 1 00:06:45.272 Run time: 1 seconds 00:06:45.272 Verify: Yes 00:06:45.272 00:06:45.272 Running for 1 seconds... 
00:06:45.272 00:06:45.272 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.272 ------------------------------------------------------------------------------------ 00:06:45.272 0,0 446496/s 1744 MiB/s 0 0 00:06:45.272 ==================================================================================== 00:06:45.272 Total 446496/s 1744 MiB/s 0 0' 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:45.272 17:16:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:45.272 17:16:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.272 17:16:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.272 17:16:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.272 17:16:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.272 17:16:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.272 17:16:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.272 17:16:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.272 17:16:53 -- accel/accel.sh@42 -- # jq -r . 00:06:45.272 [2024-10-13 17:16:53.524699] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:45.272 [2024-10-13 17:16:53.524780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2986778 ] 00:06:45.272 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.272 [2024-10-13 17:16:53.587412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.272 [2024-10-13 17:16:53.615182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val=0x1 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val=crc32c 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- 
accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val=32 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val=software 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.272 17:16:53 -- accel/accel.sh@21 -- # val=32 00:06:45.272 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.272 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.273 17:16:53 -- accel/accel.sh@21 -- # val=32 00:06:45.273 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.273 17:16:53 -- accel/accel.sh@21 -- # val=1 00:06:45.273 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.273 17:16:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.273 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.273 17:16:53 -- accel/accel.sh@20 
-- # read -r var val 00:06:45.273 17:16:53 -- accel/accel.sh@21 -- # val=Yes 00:06:45.273 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.273 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:45.273 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.273 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:45.273 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.273 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.331 17:16:54 -- accel/accel.sh@21 -- # val= 00:06:46.331 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.331 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.331 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.331 17:16:54 -- accel/accel.sh@21 -- # val= 00:06:46.331 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.331 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.332 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.332 17:16:54 -- accel/accel.sh@21 -- # val= 00:06:46.332 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.332 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.332 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.332 17:16:54 -- accel/accel.sh@21 -- # val= 00:06:46.332 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.332 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.332 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.332 17:16:54 -- accel/accel.sh@21 -- # val= 00:06:46.332 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.332 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.332 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.332 17:16:54 -- accel/accel.sh@21 -- # val= 00:06:46.332 17:16:54 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:46.332 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.332 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.332 17:16:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.332 17:16:54 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:46.332 17:16:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.332 00:06:46.332 real 0m2.457s 00:06:46.332 user 0m2.250s 00:06:46.332 sys 0m0.202s 00:06:46.332 17:16:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.332 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:06:46.332 ************************************ 00:06:46.332 END TEST accel_crc32c 00:06:46.332 ************************************ 00:06:46.332 17:16:54 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:46.332 17:16:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:46.332 17:16:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.332 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:06:46.332 ************************************ 00:06:46.332 START TEST accel_crc32c_C2 00:06:46.332 ************************************ 00:06:46.332 17:16:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:46.332 17:16:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.332 17:16:54 -- accel/accel.sh@17 -- # local accel_module 00:06:46.332 17:16:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:46.332 17:16:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:46.332 17:16:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.332 17:16:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.332 17:16:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.332 17:16:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.332 17:16:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.332 17:16:54 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.332 17:16:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.332 17:16:54 -- accel/accel.sh@42 -- # jq -r . 00:06:46.332 [2024-10-13 17:16:54.796449] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:46.332 [2024-10-13 17:16:54.796547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987012 ] 00:06:46.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.601 [2024-10-13 17:16:54.864390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.601 [2024-10-13 17:16:54.895169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.543 17:16:56 -- accel/accel.sh@18 -- # out=' 00:06:47.543 SPDK Configuration: 00:06:47.543 Core mask: 0x1 00:06:47.543 00:06:47.543 Accel Perf Configuration: 00:06:47.543 Workload Type: crc32c 00:06:47.543 CRC-32C seed: 0 00:06:47.543 Transfer size: 4096 bytes 00:06:47.543 Vector count 2 00:06:47.543 Module: software 00:06:47.543 Queue depth: 32 00:06:47.543 Allocate depth: 32 00:06:47.543 # threads/core: 1 00:06:47.543 Run time: 1 seconds 00:06:47.543 Verify: Yes 00:06:47.543 00:06:47.543 Running for 1 seconds... 
00:06:47.543 00:06:47.543 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.543 ------------------------------------------------------------------------------------ 00:06:47.543 0,0 376544/s 2941 MiB/s 0 0 00:06:47.543 ==================================================================================== 00:06:47.543 Total 376544/s 1470 MiB/s 0 0' 00:06:47.543 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.543 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.543 17:16:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:47.543 17:16:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:47.543 17:16:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.543 17:16:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.543 17:16:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.543 17:16:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.543 17:16:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.543 17:16:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.543 17:16:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.543 17:16:56 -- accel/accel.sh@42 -- # jq -r . 00:06:47.543 [2024-10-13 17:16:56.034349] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:47.543 [2024-10-13 17:16:56.034456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987164 ] 00:06:47.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.803 [2024-10-13 17:16:56.106347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.803 [2024-10-13 17:16:56.135701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.803 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:47.803 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.803 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:47.803 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.803 17:16:56 -- accel/accel.sh@21 -- # val=0x1 00:06:47.803 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.803 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:47.803 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.803 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:47.803 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.803 17:16:56 -- accel/accel.sh@21 -- # val=crc32c 00:06:47.803 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.803 17:16:56 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.803 17:16:56 -- 
accel/accel.sh@20 -- # read -r var val 00:06:47.803 17:16:56 -- accel/accel.sh@21 -- # val=0 00:06:47.803 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.803 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.803 17:16:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.803 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.804 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:47.804 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.804 17:16:56 -- accel/accel.sh@21 -- # val=software 00:06:47.804 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.804 17:16:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.804 17:16:56 -- accel/accel.sh@21 -- # val=32 00:06:47.804 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.804 17:16:56 -- accel/accel.sh@21 -- # val=32 00:06:47.804 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.804 17:16:56 -- accel/accel.sh@21 -- # val=1 00:06:47.804 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:47.804 17:16:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.804 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:47.804 17:16:56 -- accel/accel.sh@20 -- 
# read -r var val
00:06:47.804 17:16:56 -- accel/accel.sh@21 -- # val=Yes
00:06:47.804 17:16:56 -- accel/accel.sh@22 -- # case "$var" in
00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # IFS=:
00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # read -r var val
00:06:47.804 17:16:56 -- accel/accel.sh@21 -- # val=
00:06:47.804 17:16:56 -- accel/accel.sh@22 -- # case "$var" in
00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # IFS=:
00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # read -r var val
00:06:47.804 17:16:56 -- accel/accel.sh@21 -- # val=
00:06:47.804 17:16:56 -- accel/accel.sh@22 -- # case "$var" in
00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # IFS=:
00:06:47.804 17:16:56 -- accel/accel.sh@20 -- # read -r var val
00:06:48.743 17:16:57 -- accel/accel.sh@21 -- # val=
00:06:48.743 17:16:57 -- accel/accel.sh@22 -- # case "$var" in
00:06:48.743 17:16:57 -- accel/accel.sh@20 -- # IFS=:
00:06:48.743 17:16:57 -- accel/accel.sh@20 -- # read -r var val
00:06:48.743 17:16:57 -- accel/accel.sh@21 -- # val=
00:06:48.743 17:16:57 -- accel/accel.sh@22 -- # case "$var" in
00:06:48.743 17:16:57 -- accel/accel.sh@20 -- # IFS=:
00:06:48.743 17:16:57 -- accel/accel.sh@20 -- # read -r var val
00:06:48.743 17:16:57 -- accel/accel.sh@21 -- # val=
00:06:48.743 17:16:57 -- accel/accel.sh@22 -- # case "$var" in
00:06:48.743 17:16:57 -- accel/accel.sh@20 -- # IFS=:
00:06:48.743 17:16:57 -- accel/accel.sh@20 -- # read -r var val
00:06:48.743 17:16:57 -- accel/accel.sh@21 -- # val=
00:06:48.743 17:16:57 -- accel/accel.sh@22 -- # case "$var" in
00:06:48.743 17:16:57 -- accel/accel.sh@20 -- # IFS=:
00:06:48.743 17:16:57 -- accel/accel.sh@20 -- # read -r var val
00:06:48.743 17:16:57 -- accel/accel.sh@21 -- # val=
00:06:48.743 17:16:57 -- accel/accel.sh@22 -- # case "$var" in
00:06:48.743 17:16:57 -- accel/accel.sh@20 -- # IFS=:
00:06:48.743 17:16:57 -- accel/accel.sh@20 -- # read -r var val
00:06:48.743 17:16:57 -- accel/accel.sh@21 -- # val=
00:06:48.744 17:16:57 -- accel/accel.sh@22 -- # case "$var" in
00:06:48.744 17:16:57 -- accel/accel.sh@20 -- # IFS=:
00:06:48.744 17:16:57 -- accel/accel.sh@20 -- # read -r var val
00:06:48.744 17:16:57 -- accel/accel.sh@28 -- # [[ -n software ]]
00:06:48.744 17:16:57 -- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:06:48.744 17:16:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:48.744
00:06:48.744 real 0m2.482s
00:06:48.744 user 0m2.265s
00:06:48.744 sys 0m0.212s
00:06:48.744 17:16:57 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:48.744 17:16:57 -- common/autotest_common.sh@10 -- # set +x
00:06:48.744 ************************************
00:06:48.744 END TEST accel_crc32c_C2
00:06:48.744 ************************************
00:06:49.004 17:16:57 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:06:49.004 17:16:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:06:49.004 17:16:57 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:49.004 17:16:57 -- common/autotest_common.sh@10 -- # set +x
00:06:49.004 ************************************
00:06:49.004 START TEST accel_copy
00:06:49.004 ************************************
00:06:49.004 17:16:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y
00:06:49.004 17:16:57 -- accel/accel.sh@16 -- # local accel_opc
00:06:49.004 17:16:57 -- accel/accel.sh@17 -- # local accel_module
00:06:49.004 17:16:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y
00:06:49.004 17:16:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:06:49.004 17:16:57 -- accel/accel.sh@12 -- # build_accel_config
00:06:49.004 17:16:57 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:49.004 17:16:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:49.004 17:16:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:49.004 17:16:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:49.004 17:16:57 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:49.004 17:16:57 -- accel/accel.sh@41 -- # local IFS=,
00:06:49.004 17:16:57 -- accel/accel.sh@42 -- # jq -r .
00:06:49.004 [2024-10-13 17:16:57.311119] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:49.004 [2024-10-13 17:16:57.311188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987510 ]
00:06:49.004 EAL: No free 2048 kB hugepages reported on node 1
00:06:49.004 [2024-10-13 17:16:57.372835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:49.004 [2024-10-13 17:16:57.400915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.386 17:16:58 -- accel/accel.sh@18 -- # out='
00:06:50.386 SPDK Configuration:
00:06:50.386 Core mask: 0x1
00:06:50.386
00:06:50.386 Accel Perf Configuration:
00:06:50.386 Workload Type: copy
00:06:50.386 Transfer size: 4096 bytes
00:06:50.386 Vector count 1
00:06:50.386 Module: software
00:06:50.386 Queue depth: 32
00:06:50.386 Allocate depth: 32
00:06:50.386 # threads/core: 1
00:06:50.386 Run time: 1 seconds
00:06:50.386 Verify: Yes
00:06:50.386
00:06:50.386 Running for 1 seconds...
00:06:50.386
00:06:50.386 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:50.386 ------------------------------------------------------------------------------------
00:06:50.386 0,0 304800/s 1190 MiB/s 0 0
00:06:50.386 ====================================================================================
00:06:50.386 Total 304800/s 1190 MiB/s 0 0'
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:06:50.386 17:16:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:06:50.386 17:16:58 -- accel/accel.sh@12 -- # build_accel_config
00:06:50.386 17:16:58 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:50.386 17:16:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:50.386 17:16:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:50.386 17:16:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:50.386 17:16:58 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:50.386 17:16:58 -- accel/accel.sh@41 -- # local IFS=,
00:06:50.386 17:16:58 -- accel/accel.sh@42 -- # jq -r .
00:06:50.386 [2024-10-13 17:16:58.538012] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:50.386 [2024-10-13 17:16:58.538098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987846 ]
00:06:50.386 EAL: No free 2048 kB hugepages reported on node 1
00:06:50.386 [2024-10-13 17:16:58.600747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.386 [2024-10-13 17:16:58.629844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=0x1
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=copy
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@24 -- # accel_opc=copy
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val='4096 bytes'
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=software
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@23 -- # accel_module=software
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=32
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=32
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=1
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val='1 seconds'
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=Yes
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:50.386 17:16:58 -- accel/accel.sh@21 -- # val=
00:06:50.386 17:16:58 -- accel/accel.sh@22 -- # case "$var" in
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # IFS=:
00:06:50.386 17:16:58 -- accel/accel.sh@20 -- # read -r var val
00:06:51.326 17:16:59 -- accel/accel.sh@21 -- # val=
00:06:51.326 17:16:59 -- accel/accel.sh@22 -- # case "$var" in
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # IFS=:
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # read -r var val
00:06:51.326 17:16:59 -- accel/accel.sh@21 -- # val=
00:06:51.326 17:16:59 -- accel/accel.sh@22 -- # case "$var" in
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # IFS=:
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # read -r var val
00:06:51.326 17:16:59 -- accel/accel.sh@21 -- # val=
00:06:51.326 17:16:59 -- accel/accel.sh@22 -- # case "$var" in
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # IFS=:
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # read -r var val
00:06:51.326 17:16:59 -- accel/accel.sh@21 -- # val=
00:06:51.326 17:16:59 -- accel/accel.sh@22 -- # case "$var" in
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # IFS=:
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # read -r var val
00:06:51.326 17:16:59 -- accel/accel.sh@21 -- # val=
00:06:51.326 17:16:59 -- accel/accel.sh@22 -- # case "$var" in
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # IFS=:
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # read -r var val
00:06:51.326 17:16:59 -- accel/accel.sh@21 -- # val=
00:06:51.326 17:16:59 -- accel/accel.sh@22 -- # case "$var" in
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # IFS=:
00:06:51.326 17:16:59 -- accel/accel.sh@20 -- # read -r var val
00:06:51.326 17:16:59 -- accel/accel.sh@28 -- # [[ -n software ]]
00:06:51.326 17:16:59 -- accel/accel.sh@28 -- # [[ -n copy ]]
00:06:51.326 17:16:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:51.326
00:06:51.326 real 0m2.462s
00:06:51.326 user 0m2.261s
00:06:51.326 sys 0m0.196s
00:06:51.326 17:16:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:51.326 17:16:59 -- common/autotest_common.sh@10 -- # set +x
00:06:51.326 ************************************
00:06:51.326 END TEST accel_copy
00:06:51.326 ************************************
00:06:51.326 17:16:59 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:51.326 17:16:59 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:06:51.326 17:16:59 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:51.326 17:16:59 -- common/autotest_common.sh@10 -- # set +x
00:06:51.326 ************************************
00:06:51.326 START TEST accel_fill
00:06:51.326 ************************************
00:06:51.326 17:16:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:51.326 17:16:59 -- accel/accel.sh@16 -- # local accel_opc
00:06:51.326 17:16:59 -- accel/accel.sh@17 -- # local accel_module
00:06:51.326 17:16:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:51.326 17:16:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:51.326 17:16:59 -- accel/accel.sh@12 -- # build_accel_config
00:06:51.326 17:16:59 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:51.326 17:16:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:51.326 17:16:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:51.326 17:16:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:51.326 17:16:59 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:51.326 17:16:59 -- accel/accel.sh@41 -- # local IFS=,
00:06:51.326 17:16:59 -- accel/accel.sh@42 -- # jq -r .
00:06:51.326 [2024-10-13 17:16:59.819324] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:51.326 [2024-10-13 17:16:59.819418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988057 ]
00:06:51.587 EAL: No free 2048 kB hugepages reported on node 1
00:06:51.587 [2024-10-13 17:16:59.892513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.587 [2024-10-13 17:16:59.920735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.526 17:17:01 -- accel/accel.sh@18 -- # out='
00:06:52.526 SPDK Configuration:
00:06:52.526 Core mask: 0x1
00:06:52.526
00:06:52.526 Accel Perf Configuration:
00:06:52.526 Workload Type: fill
00:06:52.526 Fill pattern: 0x80
00:06:52.526 Transfer size: 4096 bytes
00:06:52.526 Vector count 1
00:06:52.526 Module: software
00:06:52.526 Queue depth: 64
00:06:52.526 Allocate depth: 64
00:06:52.526 # threads/core: 1
00:06:52.526 Run time: 1 seconds
00:06:52.526 Verify: Yes
00:06:52.526
00:06:52.526 Running for 1 seconds...
00:06:52.526
00:06:52.526 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:52.526 ------------------------------------------------------------------------------------
00:06:52.526 0,0 467648/s 1826 MiB/s 0 0
00:06:52.526 ====================================================================================
00:06:52.526 Total 467648/s 1826 MiB/s 0 0'
00:06:52.526 17:17:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:52.526 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.526 17:17:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:52.526 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.526 17:17:01 -- accel/accel.sh@12 -- # build_accel_config
00:06:52.526 17:17:01 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:52.526 17:17:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:52.526 17:17:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:52.526 17:17:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:52.526 17:17:01 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:52.526 17:17:01 -- accel/accel.sh@41 -- # local IFS=,
00:06:52.526 17:17:01 -- accel/accel.sh@42 -- # jq -r .
00:06:52.526 [2024-10-13 17:17:01.043235] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:52.526 [2024-10-13 17:17:01.043280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988218 ]
00:06:52.787 EAL: No free 2048 kB hugepages reported on node 1
00:06:52.787 [2024-10-13 17:17:01.095319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.787 [2024-10-13 17:17:01.123545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=0x1
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=fill
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@24 -- # accel_opc=fill
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=0x80
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val='4096 bytes'
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=software
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@23 -- # accel_module=software
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=64
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=64
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=1
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val='1 seconds'
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=Yes
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:52.787 17:17:01 -- accel/accel.sh@21 -- # val=
00:06:52.787 17:17:01 -- accel/accel.sh@22 -- # case "$var" in
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # IFS=:
00:06:52.787 17:17:01 -- accel/accel.sh@20 -- # read -r var val
00:06:53.728 17:17:02 -- accel/accel.sh@21 -- # val=
00:06:53.728 17:17:02 -- accel/accel.sh@22 -- # case "$var" in
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # IFS=:
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # read -r var val
00:06:53.728 17:17:02 -- accel/accel.sh@21 -- # val=
00:06:53.728 17:17:02 -- accel/accel.sh@22 -- # case "$var" in
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # IFS=:
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # read -r var val
00:06:53.728 17:17:02 -- accel/accel.sh@21 -- # val=
00:06:53.728 17:17:02 -- accel/accel.sh@22 -- # case "$var" in
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # IFS=:
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # read -r var val
00:06:53.728 17:17:02 -- accel/accel.sh@21 -- # val=
00:06:53.728 17:17:02 -- accel/accel.sh@22 -- # case "$var" in
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # IFS=:
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # read -r var val
00:06:53.728 17:17:02 -- accel/accel.sh@21 -- # val=
00:06:53.728 17:17:02 -- accel/accel.sh@22 -- # case "$var" in
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # IFS=:
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # read -r var val
00:06:53.728 17:17:02 -- accel/accel.sh@21 -- # val=
00:06:53.728 17:17:02 -- accel/accel.sh@22 -- # case "$var" in
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # IFS=:
00:06:53.728 17:17:02 -- accel/accel.sh@20 -- # read -r var val
00:06:53.728 17:17:02 -- accel/accel.sh@28 -- # [[ -n software ]]
00:06:53.728 17:17:02 -- accel/accel.sh@28 -- # [[ -n fill ]]
00:06:53.728 17:17:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:53.728
00:06:53.728 real 0m2.449s
00:06:53.728 user 0m2.260s
00:06:53.728 sys 0m0.196s
00:06:53.728 17:17:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:53.728 17:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:53.728 ************************************
00:06:53.728 END TEST accel_fill
00:06:53.728 ************************************
00:06:53.988 17:17:02 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:06:53.988 17:17:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:06:53.988 17:17:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:53.988 17:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:53.988 ************************************
00:06:53.988 START TEST accel_copy_crc32c
00:06:53.988 ************************************
00:06:53.988 17:17:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y
00:06:53.988 17:17:02 -- accel/accel.sh@16 -- # local accel_opc
00:06:53.988 17:17:02 -- accel/accel.sh@17 -- # local accel_module
00:06:53.988 17:17:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y
00:06:53.988 17:17:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:06:53.988 17:17:02 -- accel/accel.sh@12 -- # build_accel_config
00:06:53.988 17:17:02 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:53.988 17:17:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:53.988 17:17:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:53.988 17:17:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:53.988 17:17:02 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:53.988 17:17:02 -- accel/accel.sh@41 -- # local IFS=,
00:06:53.988 17:17:02 -- accel/accel.sh@42 -- # jq -r .
00:06:53.988 [2024-10-13 17:17:02.308792] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:53.988 [2024-10-13 17:17:02.308865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988569 ]
00:06:53.988 EAL: No free 2048 kB hugepages reported on node 1
00:06:53.988 [2024-10-13 17:17:02.371523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.988 [2024-10-13 17:17:02.402274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.371 17:17:03 -- accel/accel.sh@18 -- # out='
00:06:55.371 SPDK Configuration:
00:06:55.371 Core mask: 0x1
00:06:55.371
00:06:55.371 Accel Perf Configuration:
00:06:55.371 Workload Type: copy_crc32c
00:06:55.371 CRC-32C seed: 0
00:06:55.371 Vector size: 4096 bytes
00:06:55.371 Transfer size: 4096 bytes
00:06:55.371 Vector count 1
00:06:55.371 Module: software
00:06:55.371 Queue depth: 32
00:06:55.371 Allocate depth: 32
00:06:55.371 # threads/core: 1
00:06:55.371 Run time: 1 seconds
00:06:55.371 Verify: Yes
00:06:55.371
00:06:55.371 Running for 1 seconds...
00:06:55.371
00:06:55.371 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:55.371 ------------------------------------------------------------------------------------
00:06:55.371 0,0 248032/s 968 MiB/s 0 0
00:06:55.371 ====================================================================================
00:06:55.371 Total 248032/s 968 MiB/s 0 0'
00:06:55.371 17:17:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.371 17:17:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.371 17:17:03 -- accel/accel.sh@12 -- # build_accel_config
00:06:55.371 17:17:03 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:55.371 17:17:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:55.371 17:17:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:55.371 17:17:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:55.371 17:17:03 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:55.371 17:17:03 -- accel/accel.sh@41 -- # local IFS=,
00:06:55.371 17:17:03 -- accel/accel.sh@42 -- # jq -r .
00:06:55.371 [2024-10-13 17:17:03.523735] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:55.371 [2024-10-13 17:17:03.523779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988899 ]
00:06:55.371 EAL: No free 2048 kB hugepages reported on node 1
00:06:55.371 [2024-10-13 17:17:03.575216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.371 [2024-10-13 17:17:03.603208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.371 17:17:03 -- accel/accel.sh@21 -- # val=
00:06:55.371 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.371 17:17:03 -- accel/accel.sh@21 -- # val=
00:06:55.371 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.371 17:17:03 -- accel/accel.sh@21 -- # val=0x1
00:06:55.371 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.371 17:17:03 -- accel/accel.sh@21 -- # val=
00:06:55.371 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.371 17:17:03 -- accel/accel.sh@21 -- # val=
00:06:55.371 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.371 17:17:03 -- accel/accel.sh@21 -- # val=copy_crc32c
00:06:55.371 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.371 17:17:03 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.371 17:17:03 -- accel/accel.sh@21 -- # val=0
00:06:55.371 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.371 17:17:03 -- accel/accel.sh@21 -- # val='4096 bytes'
00:06:55.371 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.371 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.371 17:17:03 -- accel/accel.sh@21 -- # val='4096 bytes'
00:06:55.372 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.372 17:17:03 -- accel/accel.sh@21 -- # val=
00:06:55.372 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.372 17:17:03 -- accel/accel.sh@21 -- # val=software
00:06:55.372 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.372 17:17:03 -- accel/accel.sh@23 -- # accel_module=software
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.372 17:17:03 -- accel/accel.sh@21 -- # val=32
00:06:55.372 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.372 17:17:03 -- accel/accel.sh@21 -- # val=32
00:06:55.372 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.372 17:17:03 -- accel/accel.sh@21 -- # val=1
00:06:55.372 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.372 17:17:03 -- accel/accel.sh@21 -- # val='1 seconds'
00:06:55.372 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.372 17:17:03 -- accel/accel.sh@21 -- # val=Yes
00:06:55.372 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.372 17:17:03 -- accel/accel.sh@21 -- # val=
00:06:55.372 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:55.372 17:17:03 -- accel/accel.sh@21 -- # val=
00:06:55.372 17:17:03 -- accel/accel.sh@22 -- # case "$var" in
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # IFS=:
00:06:55.372 17:17:03 -- accel/accel.sh@20 -- # read -r var val
00:06:56.313 17:17:04 -- accel/accel.sh@21 -- # val=
00:06:56.313 17:17:04 -- accel/accel.sh@22 -- # case "$var" in
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # IFS=:
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # read -r var val
00:06:56.313 17:17:04 -- accel/accel.sh@21 -- # val=
00:06:56.313 17:17:04 -- accel/accel.sh@22 -- # case "$var" in
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # IFS=:
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # read -r var val
00:06:56.313 17:17:04 -- accel/accel.sh@21 -- # val=
00:06:56.313 17:17:04 -- accel/accel.sh@22 -- # case "$var" in
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # IFS=:
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # read -r var val
00:06:56.313 17:17:04 -- accel/accel.sh@21 -- # val=
00:06:56.313 17:17:04 -- accel/accel.sh@22 -- # case "$var" in
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # IFS=:
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # read -r var val
00:06:56.313 17:17:04 -- accel/accel.sh@21 -- # val=
00:06:56.313 17:17:04 -- accel/accel.sh@22 -- # case "$var" in
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # IFS=:
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # read -r var val
00:06:56.313 17:17:04 -- accel/accel.sh@21 -- # val=
00:06:56.313 17:17:04 -- accel/accel.sh@22 -- # case "$var" in
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # IFS=:
00:06:56.313 17:17:04 -- accel/accel.sh@20 -- # read -r var val
00:06:56.313 17:17:04 -- accel/accel.sh@28 -- # [[ -n software ]]
00:06:56.313 17:17:04 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:06:56.313 17:17:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:56.313
00:06:56.313 real 0m2.439s
00:06:56.313 user 0m2.270s
00:06:56.313 sys 0m0.178s
00:06:56.313 17:17:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:56.313 17:17:04 -- common/autotest_common.sh@10 -- # set +x
00:06:56.313 ************************************
00:06:56.313 END TEST accel_copy_crc32c
00:06:56.313 ************************************
00:06:56.313 17:17:04 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:56.313 17:17:04 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:06:56.313 17:17:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:56.313 17:17:04 -- common/autotest_common.sh@10 -- # set +x
00:06:56.313 ************************************
00:06:56.313 START TEST accel_copy_crc32c_C2
00:06:56.313 ************************************
00:06:56.313 17:17:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:06:56.313 17:17:04 -- accel/accel.sh@16 -- # local accel_opc
00:06:56.313 17:17:04 -- accel/accel.sh@17 -- # local accel_module
00:06:56.313 17:17:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:56.313 17:17:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:56.313 17:17:04 -- accel/accel.sh@12 -- # build_accel_config
00:06:56.313 17:17:04 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:56.313 17:17:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:56.313 17:17:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:56.313 17:17:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:56.313 17:17:04 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:56.313 17:17:04 -- accel/accel.sh@41 -- # local IFS=,
00:06:56.313 17:17:04 -- accel/accel.sh@42 -- # jq -r .
00:06:56.313 [2024-10-13 17:17:04.787657] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:56.313 [2024-10-13 17:17:04.787749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2989064 ]
00:06:56.574 EAL: No free 2048 kB hugepages reported on node 1
00:06:56.574 [2024-10-13 17:17:04.851261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.574 [2024-10-13 17:17:04.881404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.515 17:17:05 -- accel/accel.sh@18 -- # out='
00:06:57.515 SPDK Configuration:
00:06:57.515 Core mask: 0x1
00:06:57.515
00:06:57.515 Accel Perf Configuration:
00:06:57.515 Workload Type: copy_crc32c
00:06:57.515 CRC-32C seed: 0
00:06:57.515 Vector size: 4096 bytes
00:06:57.515 Transfer size: 8192 bytes
00:06:57.515 Vector count 2
00:06:57.515 Module: software
00:06:57.515 Queue depth: 32
00:06:57.515 Allocate depth: 32
00:06:57.515 # threads/core: 1
00:06:57.515 Run time: 1 seconds
00:06:57.515 Verify: Yes
00:06:57.515
00:06:57.515 Running for 1 seconds...
00:06:57.515
00:06:57.515 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:57.515 ------------------------------------------------------------------------------------
00:06:57.515 0,0 186816/s 1459 MiB/s 0 0
00:06:57.515 ====================================================================================
00:06:57.515 Total 186816/s 729 MiB/s 0 0'
00:06:57.515 17:17:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:57.515 17:17:05 -- accel/accel.sh@20 -- # IFS=:
00:06:57.515 17:17:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:57.515 17:17:05 -- accel/accel.sh@20 -- # read -r var val
00:06:57.515 17:17:05 -- accel/accel.sh@12 -- # build_accel_config
00:06:57.515 17:17:05 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:57.515 17:17:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:57.515 17:17:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:57.515 17:17:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:57.515 17:17:05 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:57.515 17:17:05 -- accel/accel.sh@41 -- # local IFS=,
00:06:57.515 17:17:05 -- accel/accel.sh@42 -- # jq -r .
00:06:57.515 [2024-10-13 17:17:06.020006] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:57.515 [2024-10-13 17:17:06.020108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2989270 ] 00:06:57.776 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.776 [2024-10-13 17:17:06.084013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.776 [2024-10-13 17:17:06.111213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val=0x1 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- 
accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val=0 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val=software 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val=32 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val=32 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val=1 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 
-- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val=Yes 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.776 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:57.776 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.776 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.745 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:58.745 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:58.745 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:58.745 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:58.745 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:58.745 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:58.745 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:58.745 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:58.745 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:58.745 17:17:07 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:58.745 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:58.745 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:58.745 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:58.745 17:17:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.745 17:17:07 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:58.745 17:17:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.745 00:06:58.745 real 0m2.469s 00:06:58.745 user 0m2.279s 00:06:58.745 sys 0m0.198s 00:06:58.745 17:17:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.745 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:06:58.745 ************************************ 00:06:58.745 END TEST accel_copy_crc32c_C2 00:06:58.745 ************************************ 00:06:58.745 17:17:07 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:58.745 17:17:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:58.745 17:17:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.745 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:06:58.745 ************************************ 00:06:58.745 START TEST accel_dualcast 00:06:58.745 ************************************ 00:06:58.745 17:17:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:58.745 17:17:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.745 17:17:07 -- accel/accel.sh@17 -- # local accel_module 00:06:58.745 17:17:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:58.745 17:17:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:58.745 17:17:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.745 17:17:07 -- 
accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.745 17:17:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.745 17:17:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.745 17:17:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.745 17:17:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.745 17:17:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.745 17:17:07 -- accel/accel.sh@42 -- # jq -r . 00:06:59.006 [2024-10-13 17:17:07.289800] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:59.006 [2024-10-13 17:17:07.289876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2989625 ] 00:06:59.006 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.006 [2024-10-13 17:17:07.361799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.006 [2024-10-13 17:17:07.390409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.393 17:17:08 -- accel/accel.sh@18 -- # out=' 00:07:00.393 SPDK Configuration: 00:07:00.393 Core mask: 0x1 00:07:00.393 00:07:00.393 Accel Perf Configuration: 00:07:00.393 Workload Type: dualcast 00:07:00.393 Transfer size: 4096 bytes 00:07:00.393 Vector count 1 00:07:00.393 Module: software 00:07:00.393 Queue depth: 32 00:07:00.393 Allocate depth: 32 00:07:00.393 # threads/core: 1 00:07:00.394 Run time: 1 seconds 00:07:00.394 Verify: Yes 00:07:00.394 00:07:00.394 Running for 1 seconds... 
00:07:00.394
00:07:00.394 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:00.394 ------------------------------------------------------------------------------------
00:07:00.394 0,0 365856/s 1429 MiB/s 0 0
00:07:00.394 ====================================================================================
00:07:00.394 Total 365856/s 1429 MiB/s 0 0'
00:07:00.394 17:17:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=:
00:07:00.394 17:17:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val
00:07:00.394 17:17:08 -- accel/accel.sh@12 -- # build_accel_config
00:07:00.394 17:17:08 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:00.394 17:17:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:00.394 17:17:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:00.394 17:17:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:00.394 17:17:08 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:00.394 17:17:08 -- accel/accel.sh@41 -- # local IFS=,
00:07:00.394 17:17:08 -- accel/accel.sh@42 -- # jq -r .
00:07:00.394 [2024-10-13 17:17:08.511836] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:07:00.394 [2024-10-13 17:17:08.511879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2989961 ] 00:07:00.394 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.394 [2024-10-13 17:17:08.563353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.394 [2024-10-13 17:17:08.591187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val= 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val= 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val=0x1 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val= 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val= 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val=dualcast 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- 
accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val= 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val=software 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val=32 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val=32 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val=1 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val=Yes 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 
-- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val= 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.394 17:17:08 -- accel/accel.sh@21 -- # val= 00:07:00.394 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.394 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:01.335 17:17:09 -- accel/accel.sh@21 -- # val= 00:07:01.335 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.335 17:17:09 -- accel/accel.sh@21 -- # val= 00:07:01.335 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.335 17:17:09 -- accel/accel.sh@21 -- # val= 00:07:01.335 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.335 17:17:09 -- accel/accel.sh@21 -- # val= 00:07:01.335 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.335 17:17:09 -- accel/accel.sh@21 -- # val= 00:07:01.335 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.335 17:17:09 -- accel/accel.sh@21 -- # val= 00:07:01.335 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.335 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.335 17:17:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.335 17:17:09 -- 
accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:01.335 17:17:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.335 00:07:01.335 real 0m2.444s 00:07:01.335 user 0m2.253s 00:07:01.335 sys 0m0.198s 00:07:01.335 17:17:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.335 17:17:09 -- common/autotest_common.sh@10 -- # set +x 00:07:01.335 ************************************ 00:07:01.335 END TEST accel_dualcast 00:07:01.335 ************************************ 00:07:01.335 17:17:09 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:01.335 17:17:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:01.335 17:17:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.335 17:17:09 -- common/autotest_common.sh@10 -- # set +x 00:07:01.335 ************************************ 00:07:01.335 START TEST accel_compare 00:07:01.335 ************************************ 00:07:01.335 17:17:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:01.335 17:17:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.335 17:17:09 -- accel/accel.sh@17 -- # local accel_module 00:07:01.335 17:17:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:01.335 17:17:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:01.335 17:17:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.335 17:17:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.335 17:17:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.335 17:17:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.335 17:17:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.335 17:17:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.335 17:17:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.335 17:17:09 -- accel/accel.sh@42 -- # jq -r . 
00:07:01.335 [2024-10-13 17:17:09.777496] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:07:01.335 [2024-10-13 17:17:09.777587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990083 ]
00:07:01.335 EAL: No free 2048 kB hugepages reported on node 1
00:07:01.335 [2024-10-13 17:17:09.842990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.597 [2024-10-13 17:17:09.873612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.538 17:17:10 -- accel/accel.sh@18 -- # out='
00:07:02.538 SPDK Configuration:
00:07:02.538 Core mask: 0x1
00:07:02.538
00:07:02.538 Accel Perf Configuration:
00:07:02.538 Workload Type: compare
00:07:02.538 Transfer size: 4096 bytes
00:07:02.538 Vector count 1
00:07:02.538 Module: software
00:07:02.538 Queue depth: 32
00:07:02.538 Allocate depth: 32
00:07:02.538 # threads/core: 1
00:07:02.538 Run time: 1 seconds
00:07:02.538 Verify: Yes
00:07:02.538
00:07:02.538 Running for 1 seconds...
00:07:02.538
00:07:02.538 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:02.538 ------------------------------------------------------------------------------------
00:07:02.538 0,0 437056/s 1707 MiB/s 0 0
00:07:02.538 ====================================================================================
00:07:02.538 Total 437056/s 1707 MiB/s 0 0'
00:07:02.538 17:17:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:02.538 17:17:10 -- accel/accel.sh@20 -- # IFS=:
00:07:02.538 17:17:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:02.538 17:17:10 -- accel/accel.sh@20 -- # read -r var val
00:07:02.538 17:17:10 -- accel/accel.sh@12 -- # build_accel_config
00:07:02.538 17:17:10 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:02.538 17:17:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:02.538 17:17:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:02.538 17:17:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:02.538 17:17:10 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:02.538 17:17:10 -- accel/accel.sh@41 -- # local IFS=,
00:07:02.538 17:17:10 -- accel/accel.sh@42 -- # jq -r .
00:07:02.538 [2024-10-13 17:17:10.995621] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:07:02.538 [2024-10-13 17:17:10.995664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990329 ] 00:07:02.538 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.538 [2024-10-13 17:17:11.047163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.798 [2024-10-13 17:17:11.074899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val= 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val= 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val=0x1 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val= 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val= 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val=compare 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- 
accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val= 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val=software 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val=32 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val=32 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val=1 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val=Yes 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 
-- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val= 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.799 17:17:11 -- accel/accel.sh@21 -- # val= 00:07:02.799 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.799 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:07:03.742 17:17:12 -- accel/accel.sh@21 -- # val= 00:07:03.742 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:03.742 17:17:12 -- accel/accel.sh@21 -- # val= 00:07:03.742 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:03.742 17:17:12 -- accel/accel.sh@21 -- # val= 00:07:03.742 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:03.742 17:17:12 -- accel/accel.sh@21 -- # val= 00:07:03.742 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:03.742 17:17:12 -- accel/accel.sh@21 -- # val= 00:07:03.742 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:03.742 17:17:12 -- accel/accel.sh@21 -- # val= 00:07:03.742 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:03.742 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:03.742 17:17:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.742 17:17:12 -- 
accel/accel.sh@28 -- # [[ -n compare ]] 00:07:03.742 17:17:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.742 00:07:03.742 real 0m2.442s 00:07:03.742 user 0m2.254s 00:07:03.742 sys 0m0.194s 00:07:03.742 17:17:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.742 17:17:12 -- common/autotest_common.sh@10 -- # set +x 00:07:03.742 ************************************ 00:07:03.742 END TEST accel_compare 00:07:03.742 ************************************ 00:07:03.742 17:17:12 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:03.742 17:17:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:03.742 17:17:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.742 17:17:12 -- common/autotest_common.sh@10 -- # set +x 00:07:03.742 ************************************ 00:07:03.742 START TEST accel_xor 00:07:03.742 ************************************ 00:07:03.742 17:17:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:03.742 17:17:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.742 17:17:12 -- accel/accel.sh@17 -- # local accel_module 00:07:03.742 17:17:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:03.742 17:17:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:03.742 17:17:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.742 17:17:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.742 17:17:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.742 17:17:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.742 17:17:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.742 17:17:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.742 17:17:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.742 17:17:12 -- accel/accel.sh@42 -- # jq -r . 
00:07:03.742 [2024-10-13 17:17:12.262117] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:03.742 [2024-10-13 17:17:12.262193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990681 ] 00:07:04.002 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.002 [2024-10-13 17:17:12.325181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.002 [2024-10-13 17:17:12.354230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.943 17:17:13 -- accel/accel.sh@18 -- # out=' 00:07:04.943 SPDK Configuration: 00:07:04.943 Core mask: 0x1 00:07:04.943 00:07:04.943 Accel Perf Configuration: 00:07:04.943 Workload Type: xor 00:07:04.943 Source buffers: 2 00:07:04.943 Transfer size: 4096 bytes 00:07:04.943 Vector count 1 00:07:04.943 Module: software 00:07:04.943 Queue depth: 32 00:07:04.943 Allocate depth: 32 00:07:04.943 # threads/core: 1 00:07:04.943 Run time: 1 seconds 00:07:04.943 Verify: Yes 00:07:04.943 00:07:04.943 Running for 1 seconds... 
00:07:04.943 00:07:04.943 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.943 ------------------------------------------------------------------------------------ 00:07:04.943 0,0 357696/s 1397 MiB/s 0 0 00:07:04.943 ==================================================================================== 00:07:04.943 Total 357696/s 1397 MiB/s 0 0' 00:07:04.943 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:04.943 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:04.943 17:17:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:05.204 17:17:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:05.204 17:17:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.204 17:17:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.204 17:17:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.204 17:17:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.204 17:17:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.204 17:17:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.204 17:17:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.204 17:17:13 -- accel/accel.sh@42 -- # jq -r . 00:07:05.204 [2024-10-13 17:17:13.491861] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:05.204 [2024-10-13 17:17:13.491936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990949 ] 00:07:05.204 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.204 [2024-10-13 17:17:13.554511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.204 [2024-10-13 17:17:13.582984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val= 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val= 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val=0x1 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val= 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val= 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val=xor 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- 
accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val=2 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val= 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val=software 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val=32 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val=32 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val=1 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- 
# read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val=Yes 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val= 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.204 17:17:13 -- accel/accel.sh@21 -- # val= 00:07:05.204 17:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.204 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.588 17:17:14 -- accel/accel.sh@21 -- # val= 00:07:06.588 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.588 17:17:14 -- accel/accel.sh@21 -- # val= 00:07:06.588 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.588 17:17:14 -- accel/accel.sh@21 -- # val= 00:07:06.588 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.588 17:17:14 -- accel/accel.sh@21 -- # val= 00:07:06.588 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.588 17:17:14 -- accel/accel.sh@21 -- # val= 00:07:06.588 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.588 17:17:14 -- accel/accel.sh@21 -- # val= 00:07:06.588 17:17:14 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.588 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.588 17:17:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.588 17:17:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:06.588 17:17:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.588 00:07:06.588 real 0m2.465s 00:07:06.588 user 0m2.274s 00:07:06.588 sys 0m0.198s 00:07:06.588 17:17:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.589 17:17:14 -- common/autotest_common.sh@10 -- # set +x 00:07:06.589 ************************************ 00:07:06.589 END TEST accel_xor 00:07:06.589 ************************************ 00:07:06.589 17:17:14 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:06.589 17:17:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:06.589 17:17:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.589 17:17:14 -- common/autotest_common.sh@10 -- # set +x 00:07:06.589 ************************************ 00:07:06.589 START TEST accel_xor 00:07:06.589 ************************************ 00:07:06.589 17:17:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:06.589 17:17:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.589 17:17:14 -- accel/accel.sh@17 -- # local accel_module 00:07:06.589 17:17:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:06.589 17:17:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:06.589 17:17:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.589 17:17:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.589 17:17:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.589 17:17:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.589 17:17:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.589 17:17:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:07:06.589 17:17:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.589 17:17:14 -- accel/accel.sh@42 -- # jq -r . 00:07:06.589 [2024-10-13 17:17:14.771448] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:06.589 [2024-10-13 17:17:14.771519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991094 ] 00:07:06.589 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.589 [2024-10-13 17:17:14.835054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.589 [2024-10-13 17:17:14.865051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.530 17:17:15 -- accel/accel.sh@18 -- # out=' 00:07:07.530 SPDK Configuration: 00:07:07.530 Core mask: 0x1 00:07:07.530 00:07:07.530 Accel Perf Configuration: 00:07:07.530 Workload Type: xor 00:07:07.530 Source buffers: 3 00:07:07.530 Transfer size: 4096 bytes 00:07:07.530 Vector count 1 00:07:07.530 Module: software 00:07:07.530 Queue depth: 32 00:07:07.530 Allocate depth: 32 00:07:07.530 # threads/core: 1 00:07:07.530 Run time: 1 seconds 00:07:07.530 Verify: Yes 00:07:07.530 00:07:07.530 Running for 1 seconds... 
00:07:07.530 00:07:07.530 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.530 ------------------------------------------------------------------------------------ 00:07:07.530 0,0 344608/s 1346 MiB/s 0 0 00:07:07.530 ==================================================================================== 00:07:07.530 Total 344608/s 1346 MiB/s 0 0' 00:07:07.530 17:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.530 17:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.530 17:17:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:07.530 17:17:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:07.530 17:17:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.530 17:17:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.530 17:17:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.530 17:17:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.530 17:17:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.530 17:17:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.530 17:17:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.530 17:17:15 -- accel/accel.sh@42 -- # jq -r . 00:07:07.530 [2024-10-13 17:17:16.005789] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:07.530 [2024-10-13 17:17:16.005882] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991405 ] 00:07:07.530 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.791 [2024-10-13 17:17:16.069012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.791 [2024-10-13 17:17:16.098220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val= 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val= 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val=0x1 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val= 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val= 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val=xor 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- 
accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val=3 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val= 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val=software 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val=32 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val=32 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val=1 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- 
# read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val=Yes 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val= 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:07.791 17:17:16 -- accel/accel.sh@21 -- # val= 00:07:07.791 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:07.791 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:08.733 17:17:17 -- accel/accel.sh@21 -- # val= 00:07:08.733 17:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.733 17:17:17 -- accel/accel.sh@21 -- # val= 00:07:08.733 17:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.733 17:17:17 -- accel/accel.sh@21 -- # val= 00:07:08.733 17:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.733 17:17:17 -- accel/accel.sh@21 -- # val= 00:07:08.733 17:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.733 17:17:17 -- accel/accel.sh@21 -- # val= 00:07:08.733 17:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.733 17:17:17 -- accel/accel.sh@21 -- # val= 00:07:08.733 17:17:17 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.733 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.733 17:17:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.733 17:17:17 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:08.733 17:17:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.733 00:07:08.733 real 0m2.471s 00:07:08.733 user 0m2.273s 00:07:08.733 sys 0m0.205s 00:07:08.733 17:17:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.733 17:17:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.733 ************************************ 00:07:08.733 END TEST accel_xor 00:07:08.733 ************************************ 00:07:08.733 17:17:17 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:08.733 17:17:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:08.733 17:17:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.733 17:17:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.733 ************************************ 00:07:08.733 START TEST accel_dif_verify 00:07:08.733 ************************************ 00:07:08.994 17:17:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:08.994 17:17:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.994 17:17:17 -- accel/accel.sh@17 -- # local accel_module 00:07:08.994 17:17:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:08.994 17:17:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:08.994 17:17:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.994 17:17:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.994 17:17:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.994 17:17:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.994 17:17:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.994 17:17:17 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:07:08.994 17:17:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.994 17:17:17 -- accel/accel.sh@42 -- # jq -r . 00:07:08.994 [2024-10-13 17:17:17.286351] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:08.994 [2024-10-13 17:17:17.286421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991752 ] 00:07:08.994 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.994 [2024-10-13 17:17:17.349663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.994 [2024-10-13 17:17:17.379632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.384 17:17:18 -- accel/accel.sh@18 -- # out=' 00:07:10.384 SPDK Configuration: 00:07:10.384 Core mask: 0x1 00:07:10.384 00:07:10.384 Accel Perf Configuration: 00:07:10.384 Workload Type: dif_verify 00:07:10.384 Vector size: 4096 bytes 00:07:10.384 Transfer size: 4096 bytes 00:07:10.384 Block size: 512 bytes 00:07:10.384 Metadata size: 8 bytes 00:07:10.384 Vector count 1 00:07:10.384 Module: software 00:07:10.384 Queue depth: 32 00:07:10.384 Allocate depth: 32 00:07:10.384 # threads/core: 1 00:07:10.384 Run time: 1 seconds 00:07:10.384 Verify: No 00:07:10.384 00:07:10.384 Running for 1 seconds... 
00:07:10.384 00:07:10.384 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.384 ------------------------------------------------------------------------------------ 00:07:10.384 0,0 94752/s 375 MiB/s 0 0 00:07:10.384 ==================================================================================== 00:07:10.384 Total 94752/s 370 MiB/s 0 0' 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:10.384 17:17:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:10.384 17:17:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.384 17:17:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.384 17:17:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.384 17:17:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.384 17:17:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.384 17:17:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.384 17:17:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.384 17:17:18 -- accel/accel.sh@42 -- # jq -r . 00:07:10.384 [2024-10-13 17:17:18.519912] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:10.384 [2024-10-13 17:17:18.520008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991991 ] 00:07:10.384 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.384 [2024-10-13 17:17:18.582611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.384 [2024-10-13 17:17:18.611402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val= 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val= 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val=0x1 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val= 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val= 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val=dif_verify 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- 
accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val= 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.384 17:17:18 -- accel/accel.sh@21 -- # val=software 00:07:10.384 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.384 17:17:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.384 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.385 17:17:18 -- accel/accel.sh@21 -- # val=32 00:07:10.385 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.385 17:17:18 -- accel/accel.sh@21 -- # val=32 00:07:10.385 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.385 17:17:18 -- 
accel/accel.sh@20 -- # read -r var val 00:07:10.385 17:17:18 -- accel/accel.sh@21 -- # val=1 00:07:10.385 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.385 17:17:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.385 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.385 17:17:18 -- accel/accel.sh@21 -- # val=No 00:07:10.385 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.385 17:17:18 -- accel/accel.sh@21 -- # val= 00:07:10.385 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.385 17:17:18 -- accel/accel.sh@21 -- # val= 00:07:10.385 17:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.385 17:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.325 17:17:19 -- accel/accel.sh@21 -- # val= 00:07:11.325 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.325 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.326 17:17:19 -- accel/accel.sh@21 -- # val= 00:07:11.326 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.326 17:17:19 -- accel/accel.sh@21 -- # val= 00:07:11.326 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.326 17:17:19 -- accel/accel.sh@21 -- # val= 00:07:11.326 17:17:19 
-- accel/accel.sh@22 -- # case "$var" in 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.326 17:17:19 -- accel/accel.sh@21 -- # val= 00:07:11.326 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.326 17:17:19 -- accel/accel.sh@21 -- # val= 00:07:11.326 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.326 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.326 17:17:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.326 17:17:19 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:11.326 17:17:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.326 00:07:11.326 real 0m2.471s 00:07:11.326 user 0m2.265s 00:07:11.326 sys 0m0.214s 00:07:11.326 17:17:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.326 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:07:11.326 ************************************ 00:07:11.326 END TEST accel_dif_verify 00:07:11.326 ************************************ 00:07:11.326 17:17:19 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:11.326 17:17:19 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:11.326 17:17:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.326 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:07:11.326 ************************************ 00:07:11.326 START TEST accel_dif_generate 00:07:11.326 ************************************ 00:07:11.326 17:17:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:11.326 17:17:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.326 17:17:19 -- accel/accel.sh@17 -- # local accel_module 00:07:11.326 17:17:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 
00:07:11.326 17:17:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:11.326 17:17:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.326 17:17:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.326 17:17:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.326 17:17:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.326 17:17:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.326 17:17:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.326 17:17:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.326 17:17:19 -- accel/accel.sh@42 -- # jq -r . 00:07:11.326 [2024-10-13 17:17:19.802110] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:11.326 [2024-10-13 17:17:19.802200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2992145 ] 00:07:11.326 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.587 [2024-10-13 17:17:19.865563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.587 [2024-10-13 17:17:19.894374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.528 17:17:21 -- accel/accel.sh@18 -- # out=' 00:07:12.528 SPDK Configuration: 00:07:12.528 Core mask: 0x1 00:07:12.528 00:07:12.528 Accel Perf Configuration: 00:07:12.528 Workload Type: dif_generate 00:07:12.528 Vector size: 4096 bytes 00:07:12.528 Transfer size: 4096 bytes 00:07:12.528 Block size: 512 bytes 00:07:12.528 Metadata size: 8 bytes 00:07:12.528 Vector count 1 00:07:12.528 Module: software 00:07:12.528 Queue depth: 32 00:07:12.528 Allocate depth: 32 00:07:12.528 # threads/core: 1 00:07:12.528 Run time: 1 seconds 00:07:12.528 Verify: No 00:07:12.528 00:07:12.528 Running for 1 seconds... 
00:07:12.528 00:07:12.528 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.528 ------------------------------------------------------------------------------------ 00:07:12.528 0,0 114656/s 454 MiB/s 0 0 00:07:12.528 ==================================================================================== 00:07:12.528 Total 114656/s 447 MiB/s 0 0' 00:07:12.528 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.528 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.528 17:17:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:12.528 17:17:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:12.528 17:17:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.528 17:17:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.528 17:17:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.528 17:17:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.528 17:17:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.528 17:17:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.528 17:17:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.528 17:17:21 -- accel/accel.sh@42 -- # jq -r . 00:07:12.528 [2024-10-13 17:17:21.034524] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:12.528 [2024-10-13 17:17:21.034594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2992462 ] 00:07:12.789 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.789 [2024-10-13 17:17:21.096487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.789 [2024-10-13 17:17:21.124375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.789 17:17:21 -- accel/accel.sh@21 -- # val= 00:07:12.789 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.789 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.789 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.789 17:17:21 -- accel/accel.sh@21 -- # val= 00:07:12.789 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.789 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.789 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val=0x1 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val= 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val= 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val=dif_generate 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 
-- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val= 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val=software 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val=32 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val=32 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 
-- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val=1 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val=No 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val= 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:12.790 17:17:21 -- accel/accel.sh@21 -- # val= 00:07:12.790 17:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:12.790 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.741 17:17:22 -- accel/accel.sh@21 -- # val= 00:07:13.741 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:07:13.741 17:17:22 -- accel/accel.sh@21 -- # val= 00:07:13.741 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:07:13.741 17:17:22 -- accel/accel.sh@21 -- # val= 00:07:13.741 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:07:13.741 17:17:22 -- accel/accel.sh@21 -- # val= 00:07:13.741 
17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:07:13.741 17:17:22 -- accel/accel.sh@21 -- # val= 00:07:13.741 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:07:13.741 17:17:22 -- accel/accel.sh@21 -- # val= 00:07:13.741 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:07:13.741 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:07:13.741 17:17:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.741 17:17:22 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:13.741 17:17:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.741 00:07:13.741 real 0m2.468s 00:07:13.741 user 0m2.285s 00:07:13.741 sys 0m0.191s 00:07:13.741 17:17:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.741 17:17:22 -- common/autotest_common.sh@10 -- # set +x 00:07:13.741 ************************************ 00:07:13.741 END TEST accel_dif_generate 00:07:13.741 ************************************ 00:07:14.040 17:17:22 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:14.040 17:17:22 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:14.040 17:17:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.040 17:17:22 -- common/autotest_common.sh@10 -- # set +x 00:07:14.040 ************************************ 00:07:14.040 START TEST accel_dif_generate_copy 00:07:14.040 ************************************ 00:07:14.040 17:17:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:14.040 17:17:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.040 17:17:22 -- accel/accel.sh@17 -- # local accel_module 00:07:14.040 17:17:22 -- accel/accel.sh@18 -- # 
accel_perf -t 1 -w dif_generate_copy 00:07:14.040 17:17:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:14.040 17:17:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.040 17:17:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.040 17:17:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.040 17:17:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.040 17:17:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.040 17:17:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.040 17:17:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.040 17:17:22 -- accel/accel.sh@42 -- # jq -r . 00:07:14.040 [2024-10-13 17:17:22.312806] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:14.040 [2024-10-13 17:17:22.312888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2992812 ] 00:07:14.040 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.040 [2024-10-13 17:17:22.375916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.041 [2024-10-13 17:17:22.404512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.006 17:17:23 -- accel/accel.sh@18 -- # out=' 00:07:15.007 SPDK Configuration: 00:07:15.007 Core mask: 0x1 00:07:15.007 00:07:15.007 Accel Perf Configuration: 00:07:15.007 Workload Type: dif_generate_copy 00:07:15.007 Vector size: 4096 bytes 00:07:15.007 Transfer size: 4096 bytes 00:07:15.007 Vector count 1 00:07:15.007 Module: software 00:07:15.007 Queue depth: 32 00:07:15.007 Allocate depth: 32 00:07:15.007 # threads/core: 1 00:07:15.007 Run time: 1 seconds 00:07:15.007 Verify: No 00:07:15.007 00:07:15.007 Running for 1 seconds... 
00:07:15.007 00:07:15.007 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.007 ------------------------------------------------------------------------------------ 00:07:15.007 0,0 87296/s 346 MiB/s 0 0 00:07:15.007 ==================================================================================== 00:07:15.007 Total 87296/s 341 MiB/s 0 0' 00:07:15.007 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.007 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.007 17:17:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:15.007 17:17:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:15.007 17:17:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.007 17:17:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.007 17:17:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.007 17:17:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.007 17:17:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.007 17:17:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.007 17:17:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.007 17:17:23 -- accel/accel.sh@42 -- # jq -r . 00:07:15.268 [2024-10-13 17:17:23.543212] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:15.268 [2024-10-13 17:17:23.543294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993005 ] 00:07:15.268 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.268 [2024-10-13 17:17:23.618241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.268 [2024-10-13 17:17:23.647336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val= 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val= 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val=0x1 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val= 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val= 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 
17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val= 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val=software 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val=32 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val=32 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val=1 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 
-- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val=No 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val= 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:15.268 17:17:23 -- accel/accel.sh@21 -- # val= 00:07:15.268 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:07:15.268 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.651 17:17:24 -- accel/accel.sh@21 -- # val= 00:07:16.651 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.651 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:07:16.651 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:07:16.651 17:17:24 -- accel/accel.sh@21 -- # val= 00:07:16.651 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.651 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:07:16.651 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:07:16.651 17:17:24 -- accel/accel.sh@21 -- # val= 00:07:16.651 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.651 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:07:16.652 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:07:16.652 17:17:24 -- accel/accel.sh@21 -- # val= 00:07:16.652 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.652 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:07:16.652 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:07:16.652 17:17:24 -- accel/accel.sh@21 -- # val= 00:07:16.652 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.652 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:07:16.652 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:07:16.652 17:17:24 -- accel/accel.sh@21 -- # val= 00:07:16.652 17:17:24 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:16.652 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:07:16.652 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:07:16.652 17:17:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.652 17:17:24 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:16.652 17:17:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.652 00:07:16.652 real 0m2.479s 00:07:16.652 user 0m2.274s 00:07:16.652 sys 0m0.211s 00:07:16.652 17:17:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.652 17:17:24 -- common/autotest_common.sh@10 -- # set +x 00:07:16.652 ************************************ 00:07:16.652 END TEST accel_dif_generate_copy 00:07:16.652 ************************************ 00:07:16.652 17:17:24 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:16.652 17:17:24 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.652 17:17:24 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:16.652 17:17:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.652 17:17:24 -- common/autotest_common.sh@10 -- # set +x 00:07:16.652 ************************************ 00:07:16.652 START TEST accel_comp 00:07:16.652 ************************************ 00:07:16.652 17:17:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.652 17:17:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.652 17:17:24 -- accel/accel.sh@17 -- # local accel_module 00:07:16.652 17:17:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.652 17:17:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
00:07:16.652 17:17:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.652 17:17:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.652 17:17:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.652 17:17:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.652 17:17:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.652 17:17:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.652 17:17:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.652 17:17:24 -- accel/accel.sh@42 -- # jq -r . 00:07:16.652 [2024-10-13 17:17:24.836375] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:16.652 [2024-10-13 17:17:24.836468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993189 ] 00:07:16.652 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.652 [2024-10-13 17:17:24.901107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.652 [2024-10-13 17:17:24.931076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.592 17:17:26 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:17.592 00:07:17.592 SPDK Configuration: 00:07:17.592 Core mask: 0x1 00:07:17.592 00:07:17.592 Accel Perf Configuration: 00:07:17.592 Workload Type: compress 00:07:17.592 Transfer size: 4096 bytes 00:07:17.592 Vector count 1 00:07:17.592 Module: software 00:07:17.592 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:17.592 Queue depth: 32 00:07:17.592 Allocate depth: 32 00:07:17.592 # threads/core: 1 00:07:17.592 Run time: 1 seconds 00:07:17.592 Verify: No 00:07:17.592 00:07:17.592 Running for 1 seconds... 
00:07:17.592 00:07:17.592 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.592 ------------------------------------------------------------------------------------ 00:07:17.592 0,0 47488/s 197 MiB/s 0 0 00:07:17.592 ==================================================================================== 00:07:17.592 Total 47488/s 185 MiB/s 0 0' 00:07:17.592 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 17:17:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:17.592 17:17:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:17.592 17:17:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.592 17:17:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.592 17:17:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.592 17:17:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.592 17:17:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.592 17:17:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.592 17:17:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.592 17:17:26 -- accel/accel.sh@42 -- # jq -r . 00:07:17.592 [2024-10-13 17:17:26.074908] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:17.592 [2024-10-13 17:17:26.074999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993526 ] 00:07:17.592 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.853 [2024-10-13 17:17:26.138844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.853 [2024-10-13 17:17:26.166718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val= 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val= 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val= 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val=0x1 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val= 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val= 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 
-- # val=compress 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val= 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val=software 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val=32 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val=32 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val=1 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 
00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val=No 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val= 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:17.853 17:17:26 -- accel/accel.sh@21 -- # val= 00:07:17.853 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:07:17.853 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.796 17:17:27 -- accel/accel.sh@21 -- # val= 00:07:18.796 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:07:18.796 17:17:27 -- accel/accel.sh@21 -- # val= 00:07:18.796 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:07:18.796 17:17:27 -- accel/accel.sh@21 -- # val= 00:07:18.796 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:07:18.796 17:17:27 -- accel/accel.sh@21 -- # val= 00:07:18.796 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:07:18.796 17:17:27 -- accel/accel.sh@21 -- # 
val= 00:07:18.796 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:07:18.796 17:17:27 -- accel/accel.sh@21 -- # val= 00:07:18.796 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:07:18.796 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:07:18.796 17:17:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.796 17:17:27 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:18.796 17:17:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.796 00:07:18.796 real 0m2.478s 00:07:18.796 user 0m2.276s 00:07:18.796 sys 0m0.209s 00:07:18.796 17:17:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.796 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:07:18.796 ************************************ 00:07:18.796 END TEST accel_comp 00:07:18.796 ************************************ 00:07:19.056 17:17:27 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:19.056 17:17:27 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:19.056 17:17:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.056 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:07:19.056 ************************************ 00:07:19.056 START TEST accel_decomp 00:07:19.056 ************************************ 00:07:19.056 17:17:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:19.056 17:17:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.056 17:17:27 -- accel/accel.sh@17 -- # local accel_module 00:07:19.056 17:17:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:19.056 17:17:27 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:19.056 17:17:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.056 17:17:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.056 17:17:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.056 17:17:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.056 17:17:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.056 17:17:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.056 17:17:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.056 17:17:27 -- accel/accel.sh@42 -- # jq -r . 00:07:19.056 [2024-10-13 17:17:27.357879] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:19.057 [2024-10-13 17:17:27.357978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993875 ] 00:07:19.057 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.057 [2024-10-13 17:17:27.420758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.057 [2024-10-13 17:17:27.450208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.448 17:17:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:20.448 00:07:20.448 SPDK Configuration: 00:07:20.448 Core mask: 0x1 00:07:20.448 00:07:20.448 Accel Perf Configuration: 00:07:20.448 Workload Type: decompress 00:07:20.448 Transfer size: 4096 bytes 00:07:20.448 Vector count 1 00:07:20.448 Module: software 00:07:20.448 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.448 Queue depth: 32 00:07:20.448 Allocate depth: 32 00:07:20.448 # threads/core: 1 00:07:20.448 Run time: 1 seconds 00:07:20.448 Verify: Yes 00:07:20.448 00:07:20.448 Running for 1 seconds... 
00:07:20.448 00:07:20.448 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.448 ------------------------------------------------------------------------------------ 00:07:20.448 0,0 62688/s 115 MiB/s 0 0 00:07:20.448 ==================================================================================== 00:07:20.448 Total 62688/s 244 MiB/s 0 0' 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.448 17:17:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.448 17:17:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.448 17:17:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.448 17:17:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.448 17:17:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.448 17:17:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.448 17:17:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.448 17:17:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.448 17:17:28 -- accel/accel.sh@42 -- # jq -r . 00:07:20.448 [2024-10-13 17:17:28.593748] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:20.448 [2024-10-13 17:17:28.593840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994045 ] 00:07:20.448 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.448 [2024-10-13 17:17:28.658136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.448 [2024-10-13 17:17:28.687190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val= 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val= 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val= 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val=0x1 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val= 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val= 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 
-- # val=decompress 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val= 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val=software 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val=32 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val=32 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.448 17:17:28 -- accel/accel.sh@21 -- # val=1 00:07:20.448 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # 
IFS=: 00:07:20.448 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.449 17:17:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.449 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.449 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.449 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.449 17:17:28 -- accel/accel.sh@21 -- # val=Yes 00:07:20.449 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.449 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.449 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.449 17:17:28 -- accel/accel.sh@21 -- # val= 00:07:20.449 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.449 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.449 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:20.449 17:17:28 -- accel/accel.sh@21 -- # val= 00:07:20.449 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.449 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:07:20.449 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.390 17:17:29 -- accel/accel.sh@21 -- # val= 00:07:21.390 17:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # IFS=: 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # read -r var val 00:07:21.390 17:17:29 -- accel/accel.sh@21 -- # val= 00:07:21.390 17:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # IFS=: 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # read -r var val 00:07:21.390 17:17:29 -- accel/accel.sh@21 -- # val= 00:07:21.390 17:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # IFS=: 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # read -r var val 00:07:21.390 17:17:29 -- accel/accel.sh@21 -- # val= 00:07:21.390 17:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # IFS=: 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # read -r var val 00:07:21.390 17:17:29 -- accel/accel.sh@21 
-- # val= 00:07:21.390 17:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # IFS=: 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # read -r var val 00:07:21.390 17:17:29 -- accel/accel.sh@21 -- # val= 00:07:21.390 17:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # IFS=: 00:07:21.390 17:17:29 -- accel/accel.sh@20 -- # read -r var val 00:07:21.390 17:17:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.390 17:17:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:21.390 17:17:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.390 00:07:21.390 real 0m2.478s 00:07:21.390 user 0m2.279s 00:07:21.390 sys 0m0.208s 00:07:21.390 17:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.390 17:17:29 -- common/autotest_common.sh@10 -- # set +x 00:07:21.390 ************************************ 00:07:21.390 END TEST accel_decomp 00:07:21.390 ************************************ 00:07:21.390 17:17:29 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.390 17:17:29 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:21.390 17:17:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.390 17:17:29 -- common/autotest_common.sh@10 -- # set +x 00:07:21.390 ************************************ 00:07:21.390 START TEST accel_decmop_full 00:07:21.390 ************************************ 00:07:21.390 17:17:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.390 17:17:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.390 17:17:29 -- accel/accel.sh@17 -- # local accel_module 00:07:21.390 17:17:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
-y -o 0 00:07:21.390 17:17:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.390 17:17:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.390 17:17:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.390 17:17:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.390 17:17:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.390 17:17:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.390 17:17:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.390 17:17:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.390 17:17:29 -- accel/accel.sh@42 -- # jq -r . 00:07:21.390 [2024-10-13 17:17:29.876095] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:21.390 [2024-10-13 17:17:29.876193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994251 ] 00:07:21.390 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.651 [2024-10-13 17:17:29.939517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.651 [2024-10-13 17:17:29.968078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.592 17:17:31 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:22.592 00:07:22.592 SPDK Configuration: 00:07:22.592 Core mask: 0x1 00:07:22.592 00:07:22.592 Accel Perf Configuration: 00:07:22.592 Workload Type: decompress 00:07:22.592 Transfer size: 111250 bytes 00:07:22.592 Vector count 1 00:07:22.592 Module: software 00:07:22.592 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.592 Queue depth: 32 00:07:22.592 Allocate depth: 32 00:07:22.592 # threads/core: 1 00:07:22.592 Run time: 1 seconds 00:07:22.592 Verify: Yes 00:07:22.592 00:07:22.592 Running for 1 seconds... 00:07:22.592 00:07:22.592 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.592 ------------------------------------------------------------------------------------ 00:07:22.592 0,0 4064/s 167 MiB/s 0 0 00:07:22.592 ==================================================================================== 00:07:22.592 Total 4064/s 431 MiB/s 0 0' 00:07:22.592 17:17:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:22.592 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.592 17:17:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:22.592 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.592 17:17:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.592 17:17:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.592 17:17:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.592 17:17:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.592 17:17:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.592 17:17:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.592 17:17:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.592 17:17:31 -- accel/accel.sh@42 -- # jq -r . 
00:07:22.592 [2024-10-13 17:17:31.103526] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:22.592 [2024-10-13 17:17:31.103571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994585 ] 00:07:22.853 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.853 [2024-10-13 17:17:31.155047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.853 [2024-10-13 17:17:31.182920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val= 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val= 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val= 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val=0x1 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val= 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val= 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- 
accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val=decompress 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val= 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val=software 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val=32 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val=32 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- 
accel/accel.sh@21 -- # val=1 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val=Yes 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val= 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.853 17:17:31 -- accel/accel.sh@21 -- # val= 00:07:22.853 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:07:22.853 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:07:23.795 17:17:32 -- accel/accel.sh@21 -- # val= 00:07:23.795 17:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # IFS=: 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # read -r var val 00:07:23.795 17:17:32 -- accel/accel.sh@21 -- # val= 00:07:23.795 17:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # IFS=: 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # read -r var val 00:07:23.795 17:17:32 -- accel/accel.sh@21 -- # val= 00:07:23.795 17:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # IFS=: 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # read -r var val 00:07:23.795 17:17:32 -- accel/accel.sh@21 -- # val= 00:07:23.795 17:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.795 17:17:32 
-- accel/accel.sh@20 -- # IFS=: 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # read -r var val 00:07:23.795 17:17:32 -- accel/accel.sh@21 -- # val= 00:07:23.795 17:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # IFS=: 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # read -r var val 00:07:23.795 17:17:32 -- accel/accel.sh@21 -- # val= 00:07:23.795 17:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # IFS=: 00:07:23.795 17:17:32 -- accel/accel.sh@20 -- # read -r var val 00:07:23.795 17:17:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.795 17:17:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:23.795 17:17:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.795 00:07:23.795 real 0m2.464s 00:07:23.795 user 0m2.292s 00:07:23.795 sys 0m0.179s 00:07:23.795 17:17:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.795 17:17:32 -- common/autotest_common.sh@10 -- # set +x 00:07:23.795 ************************************ 00:07:23.795 END TEST accel_decmop_full 00:07:23.795 ************************************ 00:07:24.056 17:17:32 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:24.056 17:17:32 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:24.056 17:17:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.056 17:17:32 -- common/autotest_common.sh@10 -- # set +x 00:07:24.056 ************************************ 00:07:24.056 START TEST accel_decomp_mcore 00:07:24.056 ************************************ 00:07:24.056 17:17:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:24.056 17:17:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.056 17:17:32 -- accel/accel.sh@17 -- # local 
accel_module 00:07:24.056 17:17:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:24.056 17:17:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:24.056 17:17:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.056 17:17:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.056 17:17:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.056 17:17:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.056 17:17:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.056 17:17:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.056 17:17:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.056 17:17:32 -- accel/accel.sh@42 -- # jq -r . 00:07:24.056 [2024-10-13 17:17:32.379173] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:24.056 [2024-10-13 17:17:32.379247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994932 ] 00:07:24.056 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.056 [2024-10-13 17:17:32.443105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.056 [2024-10-13 17:17:32.474523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.056 [2024-10-13 17:17:32.474652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.056 [2024-10-13 17:17:32.474805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.056 [2024-10-13 17:17:32.474806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.441 17:17:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:25.441 00:07:25.441 SPDK Configuration: 00:07:25.441 Core mask: 0xf 00:07:25.441 00:07:25.441 Accel Perf Configuration: 00:07:25.441 Workload Type: decompress 00:07:25.441 Transfer size: 4096 bytes 00:07:25.441 Vector count 1 00:07:25.441 Module: software 00:07:25.441 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.441 Queue depth: 32 00:07:25.441 Allocate depth: 32 00:07:25.441 # threads/core: 1 00:07:25.441 Run time: 1 seconds 00:07:25.441 Verify: Yes 00:07:25.441 00:07:25.441 Running for 1 seconds... 00:07:25.441 00:07:25.441 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.441 ------------------------------------------------------------------------------------ 00:07:25.441 0,0 58240/s 107 MiB/s 0 0 00:07:25.441 3,0 58656/s 108 MiB/s 0 0 00:07:25.441 2,0 86016/s 158 MiB/s 0 0 00:07:25.441 1,0 58144/s 107 MiB/s 0 0 00:07:25.441 ==================================================================================== 00:07:25.441 Total 261056/s 1019 MiB/s 0 0' 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.441 17:17:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.441 17:17:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.441 17:17:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.441 17:17:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.441 17:17:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.441 17:17:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.441 17:17:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.441 17:17:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.441 17:17:33 -- 
accel/accel.sh@42 -- # jq -r . 00:07:25.441 [2024-10-13 17:17:33.621152] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:25.441 [2024-10-13 17:17:33.621226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995097 ] 00:07:25.441 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.441 [2024-10-13 17:17:33.685160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.441 [2024-10-13 17:17:33.715927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.441 [2024-10-13 17:17:33.716042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.441 [2024-10-13 17:17:33.716197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.441 [2024-10-13 17:17:33.716197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val= 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val= 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val= 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val=0xf 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- 
accel/accel.sh@21 -- # val= 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val= 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val=decompress 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val= 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val=software 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val=32 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- 
accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val=32 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val=1 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val=Yes 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 17:17:33 -- accel/accel.sh@21 -- # val= 00:07:25.441 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.442 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.442 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:25.442 17:17:33 -- accel/accel.sh@21 -- # val= 00:07:25.442 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.442 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:07:25.442 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.383 17:17:34 -- accel/accel.sh@21 -- # val= 00:07:26.383 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.383 17:17:34 -- accel/accel.sh@21 -- # val= 00:07:26.383 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.383 
17:17:34 -- accel/accel.sh@21 -- # val= 00:07:26.383 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.383 17:17:34 -- accel/accel.sh@21 -- # val= 00:07:26.383 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.383 17:17:34 -- accel/accel.sh@21 -- # val= 00:07:26.383 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.383 17:17:34 -- accel/accel.sh@21 -- # val= 00:07:26.383 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.383 17:17:34 -- accel/accel.sh@21 -- # val= 00:07:26.383 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.383 17:17:34 -- accel/accel.sh@21 -- # val= 00:07:26.383 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.383 17:17:34 -- accel/accel.sh@21 -- # val= 00:07:26.383 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.383 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.383 17:17:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.383 17:17:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:26.383 17:17:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.383 00:07:26.383 real 0m2.492s 00:07:26.383 user 0m8.732s 00:07:26.383 sys 0m0.236s 00:07:26.383 17:17:34 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:26.383 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:07:26.383 ************************************ 00:07:26.383 END TEST accel_decomp_mcore 00:07:26.383 ************************************ 00:07:26.383 17:17:34 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.383 17:17:34 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:26.383 17:17:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.383 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:07:26.383 ************************************ 00:07:26.383 START TEST accel_decomp_full_mcore 00:07:26.383 ************************************ 00:07:26.383 17:17:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.383 17:17:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.383 17:17:34 -- accel/accel.sh@17 -- # local accel_module 00:07:26.383 17:17:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.383 17:17:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.383 17:17:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.383 17:17:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.383 17:17:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.383 17:17:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.383 17:17:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.383 17:17:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.383 17:17:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.383 17:17:34 -- accel/accel.sh@42 -- # jq -r . 
00:07:26.645 [2024-10-13 17:17:34.915619] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:26.645 [2024-10-13 17:17:34.915694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995306 ] 00:07:26.645 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.645 [2024-10-13 17:17:34.980126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.645 [2024-10-13 17:17:35.012012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.645 [2024-10-13 17:17:35.012149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.645 [2024-10-13 17:17:35.012354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.645 [2024-10-13 17:17:35.012354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.029 17:17:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:28.029 00:07:28.029 SPDK Configuration: 00:07:28.029 Core mask: 0xf 00:07:28.029 00:07:28.029 Accel Perf Configuration: 00:07:28.029 Workload Type: decompress 00:07:28.029 Transfer size: 111250 bytes 00:07:28.029 Vector count 1 00:07:28.029 Module: software 00:07:28.029 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.029 Queue depth: 32 00:07:28.029 Allocate depth: 32 00:07:28.029 # threads/core: 1 00:07:28.029 Run time: 1 seconds 00:07:28.029 Verify: Yes 00:07:28.029 00:07:28.029 Running for 1 seconds... 
00:07:28.029 00:07:28.029 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.029 ------------------------------------------------------------------------------------ 00:07:28.029 0,0 4064/s 167 MiB/s 0 0 00:07:28.029 3,0 4096/s 169 MiB/s 0 0 00:07:28.029 2,0 5888/s 243 MiB/s 0 0 00:07:28.029 1,0 4064/s 167 MiB/s 0 0 00:07:28.029 ==================================================================================== 00:07:28.029 Total 18112/s 1921 MiB/s 0 0' 00:07:28.029 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.029 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.029 17:17:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.029 17:17:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.030 17:17:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.030 17:17:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.030 17:17:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.030 17:17:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.030 17:17:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.030 17:17:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.030 17:17:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.030 17:17:36 -- accel/accel.sh@42 -- # jq -r . 00:07:28.030 [2024-10-13 17:17:36.173871] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:28.030 [2024-10-13 17:17:36.173961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995649 ] 00:07:28.030 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.030 [2024-10-13 17:17:36.237986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.030 [2024-10-13 17:17:36.269024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.030 [2024-10-13 17:17:36.269132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.030 [2024-10-13 17:17:36.269229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.030 [2024-10-13 17:17:36.269230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val= 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val= 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val= 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val=0xf 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val= 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 
-- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val= 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val=decompress 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val= 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val=software 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val=32 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val=32 00:07:28.030 17:17:36 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val=1 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val=Yes 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val= 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.030 17:17:36 -- accel/accel.sh@21 -- # val= 00:07:28.030 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:07:28.030 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:07:28.970 17:17:37 -- accel/accel.sh@21 -- # val= 00:07:28.970 17:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # IFS=: 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # read -r var val 00:07:28.970 17:17:37 -- accel/accel.sh@21 -- # val= 00:07:28.970 17:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # IFS=: 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # read -r var val 00:07:28.970 17:17:37 -- accel/accel.sh@21 -- # val= 00:07:28.970 17:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # IFS=: 00:07:28.970 
17:17:37 -- accel/accel.sh@20 -- # read -r var val 00:07:28.970 17:17:37 -- accel/accel.sh@21 -- # val= 00:07:28.970 17:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # IFS=: 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # read -r var val 00:07:28.970 17:17:37 -- accel/accel.sh@21 -- # val= 00:07:28.970 17:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # IFS=: 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # read -r var val 00:07:28.970 17:17:37 -- accel/accel.sh@21 -- # val= 00:07:28.970 17:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # IFS=: 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # read -r var val 00:07:28.970 17:17:37 -- accel/accel.sh@21 -- # val= 00:07:28.970 17:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # IFS=: 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # read -r var val 00:07:28.970 17:17:37 -- accel/accel.sh@21 -- # val= 00:07:28.970 17:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # IFS=: 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # read -r var val 00:07:28.970 17:17:37 -- accel/accel.sh@21 -- # val= 00:07:28.970 17:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # IFS=: 00:07:28.970 17:17:37 -- accel/accel.sh@20 -- # read -r var val 00:07:28.970 17:17:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.970 17:17:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.970 17:17:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.970 00:07:28.970 real 0m2.519s 00:07:28.970 user 0m8.845s 00:07:28.970 sys 0m0.218s 00:07:28.970 17:17:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.970 17:17:37 -- common/autotest_common.sh@10 -- # set +x 00:07:28.970 ************************************ 00:07:28.970 END TEST 
accel_decomp_full_mcore 00:07:28.970 ************************************ 00:07:28.970 17:17:37 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.970 17:17:37 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:28.970 17:17:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.970 17:17:37 -- common/autotest_common.sh@10 -- # set +x 00:07:28.970 ************************************ 00:07:28.970 START TEST accel_decomp_mthread 00:07:28.970 ************************************ 00:07:28.970 17:17:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.970 17:17:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.970 17:17:37 -- accel/accel.sh@17 -- # local accel_module 00:07:28.970 17:17:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.970 17:17:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.970 17:17:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.970 17:17:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.970 17:17:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.970 17:17:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.970 17:17:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.970 17:17:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.970 17:17:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.970 17:17:37 -- accel/accel.sh@42 -- # jq -r . 00:07:28.970 [2024-10-13 17:17:37.479110] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:28.970 [2024-10-13 17:17:37.479199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996006 ] 00:07:29.230 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.230 [2024-10-13 17:17:37.541757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.230 [2024-10-13 17:17:37.570200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.170 17:17:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.170 00:07:30.170 SPDK Configuration: 00:07:30.170 Core mask: 0x1 00:07:30.170 00:07:30.170 Accel Perf Configuration: 00:07:30.170 Workload Type: decompress 00:07:30.170 Transfer size: 4096 bytes 00:07:30.170 Vector count 1 00:07:30.170 Module: software 00:07:30.170 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.170 Queue depth: 32 00:07:30.170 Allocate depth: 32 00:07:30.170 # threads/core: 2 00:07:30.170 Run time: 1 seconds 00:07:30.170 Verify: Yes 00:07:30.170 00:07:30.170 Running for 1 seconds... 
00:07:30.170 00:07:30.170 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.170 ------------------------------------------------------------------------------------ 00:07:30.170 0,1 31744/s 58 MiB/s 0 0 00:07:30.170 0,0 31616/s 58 MiB/s 0 0 00:07:30.170 ==================================================================================== 00:07:30.170 Total 63360/s 247 MiB/s 0 0' 00:07:30.170 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.170 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.170 17:17:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.171 17:17:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.171 17:17:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.171 17:17:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.171 17:17:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.171 17:17:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.171 17:17:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.171 17:17:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.171 17:17:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.171 17:17:38 -- accel/accel.sh@42 -- # jq -r . 00:07:30.431 [2024-10-13 17:17:38.715228] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:30.431 [2024-10-13 17:17:38.715299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996155 ] 00:07:30.431 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.431 [2024-10-13 17:17:38.777914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.431 [2024-10-13 17:17:38.806525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val= 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val= 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val= 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val=0x1 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val= 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val= 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 
-- # val=decompress 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val= 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val=software 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val=32 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.431 17:17:38 -- accel/accel.sh@21 -- # val=32 00:07:30.431 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.431 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.432 17:17:38 -- accel/accel.sh@21 -- # val=2 00:07:30.432 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # 
IFS=: 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.432 17:17:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.432 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.432 17:17:38 -- accel/accel.sh@21 -- # val=Yes 00:07:30.432 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.432 17:17:38 -- accel/accel.sh@21 -- # val= 00:07:30.432 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:30.432 17:17:38 -- accel/accel.sh@21 -- # val= 00:07:30.432 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:07:30.432 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.815 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:31.815 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:31.815 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:31.815 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:31.815 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:31.815 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:31.815 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:31.815 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:31.815 17:17:39 -- accel/accel.sh@21 
-- # val= 00:07:31.815 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:31.815 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:31.815 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:31.815 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:31.815 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:31.815 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:31.815 17:17:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.815 17:17:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.815 17:17:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.815 00:07:31.815 real 0m2.479s 00:07:31.815 user 0m2.278s 00:07:31.815 sys 0m0.208s 00:07:31.815 17:17:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.815 17:17:39 -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 ************************************ 00:07:31.815 END TEST accel_decomp_mthread 00:07:31.815 ************************************ 00:07:31.815 17:17:39 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.815 17:17:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:31.815 17:17:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.815 17:17:39 -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 ************************************ 00:07:31.815 START TEST accel_deomp_full_mthread 00:07:31.815 ************************************ 00:07:31.815 17:17:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 
00:07:31.815 17:17:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.815 17:17:39 -- accel/accel.sh@17 -- # local accel_module 00:07:31.815 17:17:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.815 17:17:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.815 17:17:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.815 17:17:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.815 17:17:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.815 17:17:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.815 17:17:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.815 17:17:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.815 17:17:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.815 17:17:39 -- accel/accel.sh@42 -- # jq -r . 00:07:31.815 [2024-10-13 17:17:40.003148] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:31.815 [2024-10-13 17:17:40.003254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996372 ] 00:07:31.815 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.815 [2024-10-13 17:17:40.074053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.815 [2024-10-13 17:17:40.105768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.756 17:17:41 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:32.756 00:07:32.756 SPDK Configuration: 00:07:32.756 Core mask: 0x1 00:07:32.756 00:07:32.756 Accel Perf Configuration: 00:07:32.756 Workload Type: decompress 00:07:32.756 Transfer size: 111250 bytes 00:07:32.756 Vector count 1 00:07:32.756 Module: software 00:07:32.756 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.756 Queue depth: 32 00:07:32.756 Allocate depth: 32 00:07:32.756 # threads/core: 2 00:07:32.756 Run time: 1 seconds 00:07:32.756 Verify: Yes 00:07:32.756 00:07:32.756 Running for 1 seconds... 00:07:32.756 00:07:32.756 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.756 ------------------------------------------------------------------------------------ 00:07:32.756 0,1 2080/s 85 MiB/s 0 0 00:07:32.756 0,0 2048/s 84 MiB/s 0 0 00:07:32.756 ==================================================================================== 00:07:32.756 Total 4128/s 437 MiB/s 0 0' 00:07:32.756 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:32.756 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:32.756 17:17:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.756 17:17:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.756 17:17:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.756 17:17:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.756 17:17:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.756 17:17:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.756 17:17:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.756 17:17:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.756 17:17:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.756 17:17:41 -- accel/accel.sh@42 -- # jq -r . 
00:07:32.756 [2024-10-13 17:17:41.275998] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:32.756 [2024-10-13 17:17:41.276083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996714 ] 00:07:33.016 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.016 [2024-10-13 17:17:41.338902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.016 [2024-10-13 17:17:41.367444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val=0x1 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- 
accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val=decompress 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val=software 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val=32 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val=32 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- 
accel/accel.sh@21 -- # val=2 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val=Yes 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:33.016 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.400 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:34.400 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:34.400 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:34.400 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:34.400 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:34.400 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:34.400 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:34.400 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.400 17:17:42 
-- accel/accel.sh@20 -- # IFS=: 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:34.400 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:34.400 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:34.400 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:34.400 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:34.400 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:34.400 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:34.400 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:34.400 17:17:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.400 17:17:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.400 17:17:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.400 00:07:34.400 real 0m2.543s 00:07:34.400 user 0m2.326s 00:07:34.400 sys 0m0.223s 00:07:34.400 17:17:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.400 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.400 ************************************ 00:07:34.400 END TEST accel_deomp_full_mthread 00:07:34.400 ************************************ 00:07:34.400 17:17:42 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:34.400 17:17:42 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.400 17:17:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:34.400 17:17:42 -- accel/accel.sh@129 -- # build_accel_config 00:07:34.400 17:17:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.400 17:17:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.400 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.400 
17:17:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.400 17:17:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.400 17:17:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.400 17:17:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.400 17:17:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.400 17:17:42 -- accel/accel.sh@42 -- # jq -r . 00:07:34.400 ************************************ 00:07:34.400 START TEST accel_dif_functional_tests 00:07:34.400 ************************************ 00:07:34.400 17:17:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.400 [2024-10-13 17:17:42.606516] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:34.400 [2024-10-13 17:17:42.606573] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997064 ] 00:07:34.400 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.400 [2024-10-13 17:17:42.668118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.400 [2024-10-13 17:17:42.699519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.400 [2024-10-13 17:17:42.699635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.400 [2024-10-13 17:17:42.699637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.400 00:07:34.400 00:07:34.400 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.400 http://cunit.sourceforge.net/ 00:07:34.400 00:07:34.400 00:07:34.400 Suite: accel_dif 00:07:34.400 Test: verify: DIF generated, GUARD check ...passed 00:07:34.400 Test: verify: DIF generated, APPTAG check ...passed 00:07:34.400 Test: verify: DIF generated, REFTAG check ...passed 00:07:34.400 Test: verify: DIF not generated, GUARD check ...[2024-10-13 17:17:42.748683] 
dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:34.400 [2024-10-13 17:17:42.748722] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:34.400 passed 00:07:34.400 Test: verify: DIF not generated, APPTAG check ...[2024-10-13 17:17:42.748752] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:34.400 [2024-10-13 17:17:42.748767] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:34.400 passed 00:07:34.400 Test: verify: DIF not generated, REFTAG check ...[2024-10-13 17:17:42.748784] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:34.400 [2024-10-13 17:17:42.748799] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:34.400 passed 00:07:34.400 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:34.400 Test: verify: APPTAG incorrect, APPTAG check ...[2024-10-13 17:17:42.748839] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:34.400 passed 00:07:34.400 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:34.400 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:34.400 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:34.400 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-10-13 17:17:42.748950] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:34.400 passed 00:07:34.400 Test: generate copy: DIF generated, GUARD check ...passed 00:07:34.400 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:34.400 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:34.400 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:34.400 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 
00:07:34.400 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:34.400 Test: generate copy: iovecs-len validate ...[2024-10-13 17:17:42.749140] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:34.400 passed 00:07:34.400 Test: generate copy: buffer alignment validate ...passed 00:07:34.400 00:07:34.400 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.400 suites 1 1 n/a 0 0 00:07:34.400 tests 20 20 20 0 0 00:07:34.400 asserts 204 204 204 0 n/a 00:07:34.400 00:07:34.400 Elapsed time = 0.002 seconds 00:07:34.400 00:07:34.400 real 0m0.288s 00:07:34.401 user 0m0.411s 00:07:34.401 sys 0m0.123s 00:07:34.401 17:17:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.401 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.401 ************************************ 00:07:34.401 END TEST accel_dif_functional_tests 00:07:34.401 ************************************ 00:07:34.401 00:07:34.401 real 0m52.548s 00:07:34.401 user 1m1.000s 00:07:34.401 sys 0m5.700s 00:07:34.401 17:17:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.401 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.401 ************************************ 00:07:34.401 END TEST accel 00:07:34.401 ************************************ 00:07:34.401 17:17:42 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:34.401 17:17:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:34.401 17:17:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.401 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.661 ************************************ 00:07:34.661 START TEST accel_rpc 00:07:34.661 ************************************ 00:07:34.661 17:17:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 
00:07:34.661 * Looking for test storage... 00:07:34.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:34.661 17:17:43 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:34.661 17:17:43 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2997125 00:07:34.661 17:17:43 -- accel/accel_rpc.sh@15 -- # waitforlisten 2997125 00:07:34.661 17:17:43 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:34.661 17:17:43 -- common/autotest_common.sh@819 -- # '[' -z 2997125 ']' 00:07:34.661 17:17:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.661 17:17:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:34.661 17:17:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.661 17:17:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:34.661 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:34.661 [2024-10-13 17:17:43.076076] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:34.661 [2024-10-13 17:17:43.076142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997125 ] 00:07:34.661 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.661 [2024-10-13 17:17:43.142178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.661 [2024-10-13 17:17:43.175898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:34.661 [2024-10-13 17:17:43.176051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.602 17:17:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:35.602 17:17:43 -- common/autotest_common.sh@852 -- # return 0 00:07:35.602 17:17:43 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:35.602 17:17:43 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:35.602 17:17:43 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:35.602 17:17:43 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:35.602 17:17:43 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:35.602 17:17:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:35.602 17:17:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.602 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:35.602 ************************************ 00:07:35.602 START TEST accel_assign_opcode 00:07:35.602 ************************************ 00:07:35.602 17:17:43 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:35.602 17:17:43 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:35.602 17:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.602 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:35.602 [2024-10-13 17:17:43.878081] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module incorrect 00:07:35.602 17:17:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.602 17:17:43 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:35.602 17:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.602 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:35.602 [2024-10-13 17:17:43.890102] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:35.602 17:17:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.602 17:17:43 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:35.602 17:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.602 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:35.602 17:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.602 17:17:44 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:35.602 17:17:44 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:35.602 17:17:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.602 17:17:44 -- accel/accel_rpc.sh@42 -- # grep software 00:07:35.602 17:17:44 -- common/autotest_common.sh@10 -- # set +x 00:07:35.602 17:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.602 software 00:07:35.602 00:07:35.602 real 0m0.190s 00:07:35.602 user 0m0.049s 00:07:35.602 sys 0m0.008s 00:07:35.602 17:17:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.602 17:17:44 -- common/autotest_common.sh@10 -- # set +x 00:07:35.602 ************************************ 00:07:35.602 END TEST accel_assign_opcode 00:07:35.602 ************************************ 00:07:35.602 17:17:44 -- accel/accel_rpc.sh@55 -- # killprocess 2997125 00:07:35.602 17:17:44 -- common/autotest_common.sh@926 -- # '[' -z 2997125 ']' 00:07:35.602 17:17:44 -- common/autotest_common.sh@930 -- # kill -0 2997125 00:07:35.602 17:17:44 -- common/autotest_common.sh@931 -- # uname 00:07:35.602 
17:17:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:35.602 17:17:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2997125 00:07:35.862 17:17:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:35.862 17:17:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:35.862 17:17:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2997125' 00:07:35.862 killing process with pid 2997125 00:07:35.862 17:17:44 -- common/autotest_common.sh@945 -- # kill 2997125 00:07:35.862 17:17:44 -- common/autotest_common.sh@950 -- # wait 2997125 00:07:35.862 00:07:35.862 real 0m1.430s 00:07:35.862 user 0m1.533s 00:07:35.862 sys 0m0.394s 00:07:35.862 17:17:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.862 17:17:44 -- common/autotest_common.sh@10 -- # set +x 00:07:35.862 ************************************ 00:07:35.862 END TEST accel_rpc 00:07:35.862 ************************************ 00:07:36.124 17:17:44 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.124 17:17:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.124 17:17:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.124 17:17:44 -- common/autotest_common.sh@10 -- # set +x 00:07:36.124 ************************************ 00:07:36.124 START TEST app_cmdline 00:07:36.124 ************************************ 00:07:36.124 17:17:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.124 * Looking for test storage... 
00:07:36.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:36.124 17:17:44 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:36.124 17:17:44 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2997534 00:07:36.124 17:17:44 -- app/cmdline.sh@18 -- # waitforlisten 2997534 00:07:36.124 17:17:44 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:36.124 17:17:44 -- common/autotest_common.sh@819 -- # '[' -z 2997534 ']' 00:07:36.124 17:17:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.124 17:17:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.124 17:17:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.124 17:17:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.124 17:17:44 -- common/autotest_common.sh@10 -- # set +x 00:07:36.124 [2024-10-13 17:17:44.561055] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:36.124 [2024-10-13 17:17:44.561137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997534 ] 00:07:36.124 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.124 [2024-10-13 17:17:44.629917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.385 [2024-10-13 17:17:44.666130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:36.385 [2024-10-13 17:17:44.666290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.956 17:17:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.956 17:17:45 -- common/autotest_common.sh@852 -- # return 0 00:07:36.956 17:17:45 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:37.217 { 00:07:37.217 "version": "SPDK v24.01.1-pre git sha1 726a04d70", 00:07:37.217 "fields": { 00:07:37.217 "major": 24, 00:07:37.217 "minor": 1, 00:07:37.217 "patch": 1, 00:07:37.217 "suffix": "-pre", 00:07:37.217 "commit": "726a04d70" 00:07:37.217 } 00:07:37.217 } 00:07:37.217 17:17:45 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:37.217 17:17:45 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:37.217 17:17:45 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:37.217 17:17:45 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:37.217 17:17:45 -- app/cmdline.sh@26 -- # sort 00:07:37.217 17:17:45 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:37.217 17:17:45 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:37.217 17:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.217 17:17:45 -- common/autotest_common.sh@10 -- # set +x 00:07:37.217 17:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.217 
17:17:45 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:37.217 17:17:45 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:37.217 17:17:45 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.217 17:17:45 -- common/autotest_common.sh@640 -- # local es=0 00:07:37.217 17:17:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.217 17:17:45 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.217 17:17:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:37.217 17:17:45 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.217 17:17:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:37.217 17:17:45 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.217 17:17:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:37.217 17:17:45 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.217 17:17:45 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.217 17:17:45 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.217 request: 00:07:37.217 { 00:07:37.217 "method": "env_dpdk_get_mem_stats", 00:07:37.217 "req_id": 1 00:07:37.217 } 00:07:37.217 Got JSON-RPC error response 00:07:37.217 response: 00:07:37.217 { 00:07:37.217 "code": -32601, 00:07:37.217 "message": "Method not found" 00:07:37.217 } 00:07:37.217 17:17:45 -- common/autotest_common.sh@643 
-- # es=1 00:07:37.217 17:17:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:37.217 17:17:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:37.217 17:17:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:37.217 17:17:45 -- app/cmdline.sh@1 -- # killprocess 2997534 00:07:37.217 17:17:45 -- common/autotest_common.sh@926 -- # '[' -z 2997534 ']' 00:07:37.217 17:17:45 -- common/autotest_common.sh@930 -- # kill -0 2997534 00:07:37.217 17:17:45 -- common/autotest_common.sh@931 -- # uname 00:07:37.217 17:17:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:37.217 17:17:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2997534 00:07:37.477 17:17:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:37.477 17:17:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:37.477 17:17:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2997534' 00:07:37.477 killing process with pid 2997534 00:07:37.477 17:17:45 -- common/autotest_common.sh@945 -- # kill 2997534 00:07:37.477 17:17:45 -- common/autotest_common.sh@950 -- # wait 2997534 00:07:37.477 00:07:37.477 real 0m1.590s 00:07:37.477 user 0m1.949s 00:07:37.477 sys 0m0.413s 00:07:37.477 17:17:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.477 17:17:45 -- common/autotest_common.sh@10 -- # set +x 00:07:37.477 ************************************ 00:07:37.477 END TEST app_cmdline 00:07:37.477 ************************************ 00:07:37.739 17:17:46 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:37.739 17:17:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.739 17:17:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.739 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:37.739 ************************************ 00:07:37.739 START TEST version 00:07:37.739 
************************************ 00:07:37.739 17:17:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:37.739 * Looking for test storage... 00:07:37.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:37.739 17:17:46 -- app/version.sh@17 -- # get_header_version major 00:07:37.739 17:17:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.739 17:17:46 -- app/version.sh@14 -- # cut -f2 00:07:37.739 17:17:46 -- app/version.sh@14 -- # tr -d '"' 00:07:37.739 17:17:46 -- app/version.sh@17 -- # major=24 00:07:37.739 17:17:46 -- app/version.sh@18 -- # get_header_version minor 00:07:37.739 17:17:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.739 17:17:46 -- app/version.sh@14 -- # cut -f2 00:07:37.739 17:17:46 -- app/version.sh@14 -- # tr -d '"' 00:07:37.739 17:17:46 -- app/version.sh@18 -- # minor=1 00:07:37.739 17:17:46 -- app/version.sh@19 -- # get_header_version patch 00:07:37.739 17:17:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.739 17:17:46 -- app/version.sh@14 -- # cut -f2 00:07:37.739 17:17:46 -- app/version.sh@14 -- # tr -d '"' 00:07:37.739 17:17:46 -- app/version.sh@19 -- # patch=1 00:07:37.739 17:17:46 -- app/version.sh@20 -- # get_header_version suffix 00:07:37.739 17:17:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.739 17:17:46 -- app/version.sh@14 -- # cut -f2 00:07:37.739 17:17:46 -- app/version.sh@14 -- # tr -d '"' 00:07:37.739 17:17:46 -- app/version.sh@20 -- # suffix=-pre 00:07:37.739 17:17:46 -- 
app/version.sh@22 -- # version=24.1 00:07:37.739 17:17:46 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:37.739 17:17:46 -- app/version.sh@25 -- # version=24.1.1 00:07:37.739 17:17:46 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:37.739 17:17:46 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:37.739 17:17:46 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:37.739 17:17:46 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:37.739 17:17:46 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:37.739 00:07:37.739 real 0m0.166s 00:07:37.739 user 0m0.091s 00:07:37.739 sys 0m0.113s 00:07:37.739 17:17:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.739 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:37.739 ************************************ 00:07:37.739 END TEST version 00:07:37.739 ************************************ 00:07:37.739 17:17:46 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:37.739 17:17:46 -- spdk/autotest.sh@204 -- # uname -s 00:07:37.739 17:17:46 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:37.739 17:17:46 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:37.739 17:17:46 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:37.739 17:17:46 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:37.739 17:17:46 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:37.739 17:17:46 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:37.739 17:17:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:37.739 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:38.000 17:17:46 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:38.000 17:17:46 -- spdk/autotest.sh@278 -- 
# '[' 0 -eq 1 ']' 00:07:38.000 17:17:46 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:38.000 17:17:46 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:38.000 17:17:46 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:38.000 17:17:46 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:38.000 17:17:46 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:38.000 17:17:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:38.000 17:17:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.000 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:38.000 ************************************ 00:07:38.000 START TEST nvmf_tcp 00:07:38.000 ************************************ 00:07:38.000 17:17:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:38.000 * Looking for test storage... 00:07:38.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:38.000 17:17:46 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:38.000 17:17:46 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:38.000 17:17:46 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.000 17:17:46 -- nvmf/common.sh@7 -- # uname -s 00:07:38.000 17:17:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.000 17:17:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.000 17:17:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.000 17:17:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.000 17:17:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.000 17:17:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.000 17:17:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.000 17:17:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.000 17:17:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.000 17:17:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.000 17:17:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:38.000 17:17:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:38.000 17:17:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.000 17:17:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.000 17:17:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.000 17:17:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.000 17:17:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.000 17:17:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.000 17:17:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.001 17:17:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.001 17:17:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.001 17:17:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.001 17:17:46 -- paths/export.sh@5 -- # export PATH 00:07:38.001 17:17:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.001 17:17:46 -- nvmf/common.sh@46 -- # : 0 00:07:38.001 17:17:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:38.001 17:17:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:38.001 
17:17:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:38.001 17:17:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.001 17:17:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.001 17:17:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:38.001 17:17:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:38.001 17:17:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:38.001 17:17:46 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:38.001 17:17:46 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:38.001 17:17:46 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:38.001 17:17:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:38.001 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:38.001 17:17:46 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:38.001 17:17:46 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:38.001 17:17:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:38.001 17:17:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.001 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:38.001 ************************************ 00:07:38.001 START TEST nvmf_example 00:07:38.001 ************************************ 00:07:38.001 17:17:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:38.262 * Looking for test storage... 
00:07:38.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.262 17:17:46 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.262 17:17:46 -- nvmf/common.sh@7 -- # uname -s 00:07:38.262 17:17:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.262 17:17:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.262 17:17:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.262 17:17:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.262 17:17:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.262 17:17:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.262 17:17:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.262 17:17:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.262 17:17:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.262 17:17:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.262 17:17:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:38.262 17:17:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:38.262 17:17:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.262 17:17:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.262 17:17:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.262 17:17:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.262 17:17:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.262 17:17:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.262 17:17:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.262 17:17:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.262 17:17:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.262 17:17:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.262 17:17:46 -- paths/export.sh@5 -- # export PATH 00:07:38.262 17:17:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.262 17:17:46 -- nvmf/common.sh@46 -- # : 0 00:07:38.262 17:17:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:38.262 17:17:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:38.262 17:17:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:38.262 17:17:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.262 17:17:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.262 17:17:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:38.262 17:17:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:38.262 17:17:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:38.262 17:17:46 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:38.262 17:17:46 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:38.262 17:17:46 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:38.262 17:17:46 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:38.262 17:17:46 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:38.262 17:17:46 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:38.262 17:17:46 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:38.262 17:17:46 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:38.262 17:17:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:38.262 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:38.262 17:17:46 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:07:38.262 17:17:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:38.262 17:17:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.262 17:17:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:38.262 17:17:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:38.262 17:17:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:38.262 17:17:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.262 17:17:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.262 17:17:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.262 17:17:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:38.262 17:17:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:38.262 17:17:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:38.262 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:44.844 17:17:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:44.844 17:17:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:44.844 17:17:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:44.844 17:17:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:44.844 17:17:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:44.844 17:17:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:44.844 17:17:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:44.844 17:17:53 -- nvmf/common.sh@294 -- # net_devs=() 00:07:44.844 17:17:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:44.844 17:17:53 -- nvmf/common.sh@295 -- # e810=() 00:07:44.844 17:17:53 -- nvmf/common.sh@295 -- # local -ga e810 00:07:44.844 17:17:53 -- nvmf/common.sh@296 -- # x722=() 00:07:44.844 17:17:53 -- nvmf/common.sh@296 -- # local -ga x722 00:07:44.844 17:17:53 -- nvmf/common.sh@297 -- # mlx=() 00:07:44.844 17:17:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:44.844 17:17:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:07:44.844 17:17:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.845 17:17:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.845 17:17:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.845 17:17:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.845 17:17:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.845 17:17:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.845 17:17:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.845 17:17:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.845 17:17:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.845 17:17:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.845 17:17:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:44.845 17:17:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:44.845 17:17:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:44.845 17:17:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:44.845 17:17:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:44.845 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:44.845 17:17:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:07:44.845 17:17:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:44.845 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:44.845 17:17:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:44.845 17:17:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:44.845 17:17:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.845 17:17:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:44.845 17:17:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.845 17:17:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:44.845 Found net devices under 0000:31:00.0: cvl_0_0 00:07:44.845 17:17:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.845 17:17:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:44.845 17:17:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.845 17:17:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:44.845 17:17:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.845 17:17:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:44.845 Found net devices under 0000:31:00.1: cvl_0_1 00:07:44.845 17:17:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.845 17:17:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:44.845 17:17:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:44.845 17:17:53 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:44.845 17:17:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:44.845 17:17:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.845 17:17:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.845 17:17:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.845 17:17:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:44.845 17:17:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.845 17:17:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.845 17:17:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:44.845 17:17:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.845 17:17:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.845 17:17:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:44.845 17:17:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:44.845 17:17:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.845 17:17:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.106 17:17:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.106 17:17:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.106 17:17:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:45.106 17:17:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.106 17:17:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.106 17:17:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.106 17:17:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:45.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:45.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:07:45.106 00:07:45.106 --- 10.0.0.2 ping statistics --- 00:07:45.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.106 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:07:45.106 17:17:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:07:45.106 00:07:45.106 --- 10.0.0.1 ping statistics --- 00:07:45.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.106 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:07:45.106 17:17:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.106 17:17:53 -- nvmf/common.sh@410 -- # return 0 00:07:45.106 17:17:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:45.106 17:17:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.106 17:17:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:45.106 17:17:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:45.106 17:17:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.106 17:17:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:45.106 17:17:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:45.106 17:17:53 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:45.106 17:17:53 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:45.106 17:17:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:45.106 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:07:45.106 17:17:53 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:45.106 17:17:53 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:45.106 17:17:53 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:45.106 17:17:53 -- target/nvmf_example.sh@34 -- # nvmfpid=3001711 00:07:45.106 17:17:53 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.106 17:17:53 -- target/nvmf_example.sh@36 -- # waitforlisten 3001711 00:07:45.106 17:17:53 -- common/autotest_common.sh@819 -- # '[' -z 3001711 ']' 00:07:45.106 17:17:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.106 17:17:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:45.106 17:17:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.106 17:17:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:45.106 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:07:45.368 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.940 17:17:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:45.940 17:17:54 -- common/autotest_common.sh@852 -- # return 0 00:07:45.940 17:17:54 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:45.940 17:17:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:45.940 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.200 17:17:54 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.200 17:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.200 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.200 17:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.200 17:17:54 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:46.200 17:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.200 17:17:54 -- common/autotest_common.sh@10 
-- # set +x 00:07:46.200 17:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.200 17:17:54 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:46.200 17:17:54 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.200 17:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.200 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.200 17:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.200 17:17:54 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:46.200 17:17:54 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.200 17:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.200 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.200 17:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.200 17:17:54 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.200 17:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.200 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.200 17:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.200 17:17:54 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:46.200 17:17:54 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:46.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.428 Initializing NVMe Controllers 00:07:58.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:58.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
00:07:58.428 Initialization complete. Launching workers. 00:07:58.428 ======================================================== 00:07:58.428 Latency(us) 00:07:58.428 Device Information : IOPS MiB/s Average min max 00:07:58.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18943.92 74.00 3378.23 658.38 42194.59 00:07:58.428 ======================================================== 00:07:58.428 Total : 18943.92 74.00 3378.23 658.38 42194.59 00:07:58.428 00:07:58.428 17:18:04 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:58.428 17:18:04 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:58.428 17:18:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:58.428 17:18:04 -- nvmf/common.sh@116 -- # sync 00:07:58.428 17:18:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:58.428 17:18:04 -- nvmf/common.sh@119 -- # set +e 00:07:58.428 17:18:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:58.428 17:18:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:58.428 rmmod nvme_tcp 00:07:58.428 rmmod nvme_fabrics 00:07:58.428 rmmod nvme_keyring 00:07:58.428 17:18:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:58.428 17:18:04 -- nvmf/common.sh@123 -- # set -e 00:07:58.428 17:18:04 -- nvmf/common.sh@124 -- # return 0 00:07:58.428 17:18:04 -- nvmf/common.sh@477 -- # '[' -n 3001711 ']' 00:07:58.428 17:18:04 -- nvmf/common.sh@478 -- # killprocess 3001711 00:07:58.428 17:18:04 -- common/autotest_common.sh@926 -- # '[' -z 3001711 ']' 00:07:58.428 17:18:04 -- common/autotest_common.sh@930 -- # kill -0 3001711 00:07:58.428 17:18:04 -- common/autotest_common.sh@931 -- # uname 00:07:58.428 17:18:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:58.428 17:18:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3001711 00:07:58.428 17:18:04 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:58.428 17:18:04 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:58.428 
17:18:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3001711' 00:07:58.428 killing process with pid 3001711 00:07:58.428 17:18:04 -- common/autotest_common.sh@945 -- # kill 3001711 00:07:58.428 17:18:04 -- common/autotest_common.sh@950 -- # wait 3001711 00:07:58.428 nvmf threads initialize successfully 00:07:58.428 bdev subsystem init successfully 00:07:58.428 created a nvmf target service 00:07:58.428 create targets's poll groups done 00:07:58.428 all subsystems of target started 00:07:58.428 nvmf target is running 00:07:58.428 all subsystems of target stopped 00:07:58.428 destroy targets's poll groups done 00:07:58.428 destroyed the nvmf target service 00:07:58.428 bdev subsystem finish successfully 00:07:58.428 nvmf threads destroy successfully 00:07:58.428 17:18:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:58.428 17:18:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:58.428 17:18:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:58.428 17:18:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.428 17:18:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:58.428 17:18:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.428 17:18:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.428 17:18:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.688 17:18:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:58.688 17:18:07 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:58.688 17:18:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:58.688 17:18:07 -- common/autotest_common.sh@10 -- # set +x 00:07:58.688 00:07:58.688 real 0m20.768s 00:07:58.688 user 0m46.443s 00:07:58.688 sys 0m6.569s 00:07:58.688 17:18:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.688 17:18:07 -- common/autotest_common.sh@10 -- # set +x 00:07:58.688 ************************************ 
00:07:58.688 END TEST nvmf_example 00:07:58.688 ************************************ 00:07:58.959 17:18:07 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:58.959 17:18:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:58.959 17:18:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.959 17:18:07 -- common/autotest_common.sh@10 -- # set +x 00:07:58.959 ************************************ 00:07:58.959 START TEST nvmf_filesystem 00:07:58.959 ************************************ 00:07:58.959 17:18:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:58.959 * Looking for test storage... 00:07:58.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.959 17:18:07 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:58.959 17:18:07 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:58.959 17:18:07 -- common/autotest_common.sh@34 -- # set -e 00:07:58.959 17:18:07 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:58.959 17:18:07 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:58.959 17:18:07 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:58.959 17:18:07 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:58.959 17:18:07 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:58.959 17:18:07 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:58.959 17:18:07 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:58.959 17:18:07 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:58.959 17:18:07 -- common/build_config.sh@5 -- # 
CONFIG_USDT=n 00:07:58.959 17:18:07 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:58.959 17:18:07 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:58.959 17:18:07 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:58.959 17:18:07 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:58.959 17:18:07 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:58.959 17:18:07 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:58.959 17:18:07 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:58.959 17:18:07 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:58.959 17:18:07 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:58.959 17:18:07 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:58.959 17:18:07 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:58.959 17:18:07 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:58.959 17:18:07 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:58.959 17:18:07 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:58.959 17:18:07 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:58.959 17:18:07 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:58.959 17:18:07 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:58.959 17:18:07 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:58.959 17:18:07 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:58.959 17:18:07 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:58.959 17:18:07 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:58.959 17:18:07 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:58.959 17:18:07 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:58.959 17:18:07 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:58.959 17:18:07 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:58.959 17:18:07 -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:58.959 17:18:07 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:58.959 17:18:07 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:58.959 17:18:07 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:58.959 17:18:07 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:58.959 17:18:07 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:58.959 17:18:07 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:58.959 17:18:07 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:58.959 17:18:07 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:58.959 17:18:07 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:58.959 17:18:07 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:58.959 17:18:07 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:58.959 17:18:07 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:58.959 17:18:07 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:58.959 17:18:07 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:58.959 17:18:07 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:58.959 17:18:07 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:58.959 17:18:07 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:58.959 17:18:07 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:58.959 17:18:07 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:58.959 17:18:07 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:58.959 17:18:07 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:58.959 17:18:07 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:58.959 17:18:07 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:58.959 17:18:07 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:58.959 17:18:07 -- 
common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:58.959 17:18:07 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:58.959 17:18:07 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:58.959 17:18:07 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:58.959 17:18:07 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:58.959 17:18:07 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:58.959 17:18:07 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:58.959 17:18:07 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:58.959 17:18:07 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:58.959 17:18:07 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:58.959 17:18:07 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:58.959 17:18:07 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:58.959 17:18:07 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:58.959 17:18:07 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:58.959 17:18:07 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:58.959 17:18:07 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:58.959 17:18:07 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:58.959 17:18:07 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:58.959 17:18:07 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:58.959 17:18:07 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:58.959 17:18:07 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:58.959 17:18:07 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:58.959 17:18:07 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:58.959 17:18:07 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:58.959 17:18:07 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:58.959 17:18:07 -- 
common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:58.959 17:18:07 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:58.959 17:18:07 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:58.959 17:18:07 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:58.959 17:18:07 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:58.959 17:18:07 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:58.959 17:18:07 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:58.959 17:18:07 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:58.959 17:18:07 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:58.959 17:18:07 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:58.959 17:18:07 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:58.959 17:18:07 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:58.959 17:18:07 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:58.959 17:18:07 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:58.959 17:18:07 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:58.959 #define SPDK_CONFIG_H 00:07:58.959 #define SPDK_CONFIG_APPS 1 00:07:58.959 #define SPDK_CONFIG_ARCH native 00:07:58.959 #undef SPDK_CONFIG_ASAN 00:07:58.959 #undef SPDK_CONFIG_AVAHI 00:07:58.959 #undef SPDK_CONFIG_CET 00:07:58.959 #define SPDK_CONFIG_COVERAGE 1 00:07:58.959 #define SPDK_CONFIG_CROSS_PREFIX 00:07:58.959 #undef SPDK_CONFIG_CRYPTO 
00:07:58.959 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:58.959 #undef SPDK_CONFIG_CUSTOMOCF 00:07:58.959 #undef SPDK_CONFIG_DAOS 00:07:58.959 #define SPDK_CONFIG_DAOS_DIR 00:07:58.959 #define SPDK_CONFIG_DEBUG 1 00:07:58.959 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:58.959 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:58.959 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:58.959 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:58.959 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:58.959 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:58.959 #define SPDK_CONFIG_EXAMPLES 1 00:07:58.959 #undef SPDK_CONFIG_FC 00:07:58.959 #define SPDK_CONFIG_FC_PATH 00:07:58.959 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:58.959 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:58.959 #undef SPDK_CONFIG_FUSE 00:07:58.959 #undef SPDK_CONFIG_FUZZER 00:07:58.959 #define SPDK_CONFIG_FUZZER_LIB 00:07:58.959 #undef SPDK_CONFIG_GOLANG 00:07:58.959 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:58.959 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:58.959 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:58.959 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:58.959 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:58.959 #define SPDK_CONFIG_IDXD 1 00:07:58.959 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:58.959 #undef SPDK_CONFIG_IPSEC_MB 00:07:58.959 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:58.959 #define SPDK_CONFIG_ISAL 1 00:07:58.959 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:58.959 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:58.959 #define SPDK_CONFIG_LIBDIR 00:07:58.959 #undef SPDK_CONFIG_LTO 00:07:58.959 #define SPDK_CONFIG_MAX_LCORES 00:07:58.959 #define SPDK_CONFIG_NVME_CUSE 1 00:07:58.959 #undef SPDK_CONFIG_OCF 00:07:58.959 #define SPDK_CONFIG_OCF_PATH 00:07:58.959 #define SPDK_CONFIG_OPENSSL_PATH 00:07:58.959 #undef 
SPDK_CONFIG_PGO_CAPTURE 00:07:58.959 #undef SPDK_CONFIG_PGO_USE 00:07:58.959 #define SPDK_CONFIG_PREFIX /usr/local 00:07:58.959 #undef SPDK_CONFIG_RAID5F 00:07:58.960 #undef SPDK_CONFIG_RBD 00:07:58.960 #define SPDK_CONFIG_RDMA 1 00:07:58.960 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:58.960 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:58.960 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:58.960 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:58.960 #define SPDK_CONFIG_SHARED 1 00:07:58.960 #undef SPDK_CONFIG_SMA 00:07:58.960 #define SPDK_CONFIG_TESTS 1 00:07:58.960 #undef SPDK_CONFIG_TSAN 00:07:58.960 #define SPDK_CONFIG_UBLK 1 00:07:58.960 #define SPDK_CONFIG_UBSAN 1 00:07:58.960 #undef SPDK_CONFIG_UNIT_TESTS 00:07:58.960 #undef SPDK_CONFIG_URING 00:07:58.960 #define SPDK_CONFIG_URING_PATH 00:07:58.960 #undef SPDK_CONFIG_URING_ZNS 00:07:58.960 #undef SPDK_CONFIG_USDT 00:07:58.960 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:58.960 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:58.960 #define SPDK_CONFIG_VFIO_USER 1 00:07:58.960 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:58.960 #define SPDK_CONFIG_VHOST 1 00:07:58.960 #define SPDK_CONFIG_VIRTIO 1 00:07:58.960 #undef SPDK_CONFIG_VTUNE 00:07:58.960 #define SPDK_CONFIG_VTUNE_DIR 00:07:58.960 #define SPDK_CONFIG_WERROR 1 00:07:58.960 #define SPDK_CONFIG_WPDK_DIR 00:07:58.960 #undef SPDK_CONFIG_XNVME 00:07:58.960 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:58.960 17:18:07 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:58.960 17:18:07 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.960 17:18:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.960 17:18:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.960 17:18:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.960 17:18:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.960 17:18:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.960 17:18:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.960 17:18:07 -- paths/export.sh@5 -- # export PATH 00:07:58.960 17:18:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.960 17:18:07 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:58.960 17:18:07 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:58.960 17:18:07 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:58.960 17:18:07 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:58.960 17:18:07 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:58.960 17:18:07 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:58.960 17:18:07 -- pm/common@16 -- # TEST_TAG=N/A 00:07:58.960 17:18:07 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:58.960 17:18:07 -- common/autotest_common.sh@52 -- # : 1 00:07:58.960 17:18:07 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:58.960 17:18:07 -- common/autotest_common.sh@56 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:58.960 17:18:07 -- common/autotest_common.sh@58 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:58.960 17:18:07 -- common/autotest_common.sh@60 -- # : 1 00:07:58.960 17:18:07 -- 
common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:58.960 17:18:07 -- common/autotest_common.sh@62 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:58.960 17:18:07 -- common/autotest_common.sh@64 -- # : 00:07:58.960 17:18:07 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:58.960 17:18:07 -- common/autotest_common.sh@66 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:58.960 17:18:07 -- common/autotest_common.sh@68 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:58.960 17:18:07 -- common/autotest_common.sh@70 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:58.960 17:18:07 -- common/autotest_common.sh@72 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:58.960 17:18:07 -- common/autotest_common.sh@74 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:58.960 17:18:07 -- common/autotest_common.sh@76 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:58.960 17:18:07 -- common/autotest_common.sh@78 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:58.960 17:18:07 -- common/autotest_common.sh@80 -- # : 1 00:07:58.960 17:18:07 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:58.960 17:18:07 -- common/autotest_common.sh@82 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:58.960 17:18:07 -- common/autotest_common.sh@84 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:58.960 17:18:07 -- common/autotest_common.sh@86 -- # : 1 00:07:58.960 17:18:07 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:58.960 
17:18:07 -- common/autotest_common.sh@88 -- # : 1 00:07:58.960 17:18:07 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:58.960 17:18:07 -- common/autotest_common.sh@90 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:58.960 17:18:07 -- common/autotest_common.sh@92 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:58.960 17:18:07 -- common/autotest_common.sh@94 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:58.960 17:18:07 -- common/autotest_common.sh@96 -- # : tcp 00:07:58.960 17:18:07 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:58.960 17:18:07 -- common/autotest_common.sh@98 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:58.960 17:18:07 -- common/autotest_common.sh@100 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:58.960 17:18:07 -- common/autotest_common.sh@102 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:58.960 17:18:07 -- common/autotest_common.sh@104 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:58.960 17:18:07 -- common/autotest_common.sh@106 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:58.960 17:18:07 -- common/autotest_common.sh@108 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:58.960 17:18:07 -- common/autotest_common.sh@110 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:58.960 17:18:07 -- common/autotest_common.sh@112 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:58.960 17:18:07 -- common/autotest_common.sh@114 -- # : 0 
00:07:58.960 17:18:07 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:58.960 17:18:07 -- common/autotest_common.sh@116 -- # : 1 00:07:58.960 17:18:07 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:58.960 17:18:07 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:58.960 17:18:07 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:58.960 17:18:07 -- common/autotest_common.sh@120 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:58.960 17:18:07 -- common/autotest_common.sh@122 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:58.960 17:18:07 -- common/autotest_common.sh@124 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:58.960 17:18:07 -- common/autotest_common.sh@126 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:58.960 17:18:07 -- common/autotest_common.sh@128 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:58.960 17:18:07 -- common/autotest_common.sh@130 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:58.960 17:18:07 -- common/autotest_common.sh@132 -- # : v23.11 00:07:58.960 17:18:07 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:58.960 17:18:07 -- common/autotest_common.sh@134 -- # : true 00:07:58.960 17:18:07 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:58.960 17:18:07 -- common/autotest_common.sh@136 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:58.960 17:18:07 -- common/autotest_common.sh@138 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:58.960 17:18:07 -- common/autotest_common.sh@140 -- # : 0 00:07:58.960 
17:18:07 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:58.960 17:18:07 -- common/autotest_common.sh@142 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:58.960 17:18:07 -- common/autotest_common.sh@144 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:58.960 17:18:07 -- common/autotest_common.sh@146 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:58.960 17:18:07 -- common/autotest_common.sh@148 -- # : e810 00:07:58.960 17:18:07 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:58.960 17:18:07 -- common/autotest_common.sh@150 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:58.960 17:18:07 -- common/autotest_common.sh@152 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:58.960 17:18:07 -- common/autotest_common.sh@154 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:58.960 17:18:07 -- common/autotest_common.sh@156 -- # : 0 00:07:58.960 17:18:07 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:58.960 17:18:07 -- common/autotest_common.sh@158 -- # : 0 00:07:58.961 17:18:07 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:58.961 17:18:07 -- common/autotest_common.sh@160 -- # : 0 00:07:58.961 17:18:07 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:58.961 17:18:07 -- common/autotest_common.sh@163 -- # : 00:07:58.961 17:18:07 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:58.961 17:18:07 -- common/autotest_common.sh@165 -- # : 0 00:07:58.961 17:18:07 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:58.961 17:18:07 -- common/autotest_common.sh@167 -- # : 0 00:07:58.961 17:18:07 -- common/autotest_common.sh@168 -- # 
export SPDK_JSONRPC_GO_CLIENT 00:07:58.961 17:18:07 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:58.961 17:18:07 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:58.961 17:18:07 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:58.961 17:18:07 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:58.961 17:18:07 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.961 17:18:07 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.961 17:18:07 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
00:07:58.961 17:18:07 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.961 17:18:07 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:58.961 17:18:07 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:58.961 17:18:07 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:58.961 17:18:07 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:58.961 17:18:07 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:58.961 17:18:07 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:58.961 17:18:07 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:58.961 17:18:07 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:58.961 17:18:07 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:58.961 17:18:07 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:58.961 17:18:07 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:58.961 17:18:07 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:58.961 17:18:07 -- common/autotest_common.sh@196 -- # cat 00:07:58.961 17:18:07 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:58.961 17:18:07 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:58.961 17:18:07 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:58.961 17:18:07 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:58.961 17:18:07 -- 
common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:58.961 17:18:07 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:58.961 17:18:07 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:58.961 17:18:07 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:58.961 17:18:07 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:58.961 17:18:07 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:58.961 17:18:07 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:58.961 17:18:07 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:58.961 17:18:07 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:58.961 17:18:07 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:58.961 17:18:07 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:58.961 17:18:07 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:58.961 17:18:07 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:58.961 17:18:07 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:58.961 17:18:07 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:58.961 17:18:07 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:58.961 17:18:07 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:58.961 17:18:07 -- 
common/autotest_common.sh@249 -- # valgrind= 00:07:58.961 17:18:07 -- common/autotest_common.sh@255 -- # uname -s 00:07:58.961 17:18:07 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:58.961 17:18:07 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:58.961 17:18:07 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:58.961 17:18:07 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:58.961 17:18:07 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:58.961 17:18:07 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:58.961 17:18:07 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:58.961 17:18:07 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j144 00:07:58.961 17:18:07 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:58.961 17:18:07 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:58.961 17:18:07 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:58.961 17:18:07 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:58.961 17:18:07 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:58.961 17:18:07 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:58.961 17:18:07 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:58.961 17:18:07 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:58.961 17:18:07 -- common/autotest_common.sh@309 -- # [[ -z 3004636 ]] 00:07:58.961 17:18:07 -- common/autotest_common.sh@309 -- # kill -0 3004636 00:07:58.961 17:18:07 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:58.961 17:18:07 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:58.961 17:18:07 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:58.961 17:18:07 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:58.961 17:18:07 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:58.961 17:18:07 -- 
common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:58.961 17:18:07 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:58.961 17:18:07 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:58.961 17:18:07 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.clpSmL 00:07:58.961 17:18:07 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:58.961 17:18:07 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:58.961 17:18:07 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:58.961 17:18:07 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.clpSmL/tests/target /tmp/spdk.clpSmL 00:07:58.961 17:18:07 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:58.961 17:18:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:58.961 17:18:07 -- common/autotest_common.sh@318 -- # df -T 00:07:58.961 17:18:07 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:58.961 17:18:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:59.319 17:18:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:59.319 17:18:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=156295168 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- 
# sizes["$mount"]=5284429824 00:07:59.319 17:18:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=5128134656 00:07:59.319 17:18:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=121157947392 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129356537856 00:07:59.319 17:18:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=8198590464 00:07:59.319 17:18:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=64677011456 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64678268928 00:07:59.319 17:18:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:07:59.319 17:18:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=25861541888 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25871310848 00:07:59.319 17:18:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=9768960 00:07:59.319 17:18:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:07:59.319 17:18:07 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=175104 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:07:59.319 17:18:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=328704 00:07:59.319 17:18:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=64677781504 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64678268928 00:07:59.319 17:18:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=487424 00:07:59.319 17:18:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=12935639040 00:07:59.319 17:18:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12935651328 00:07:59.319 17:18:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288 00:07:59.319 17:18:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:59.319 17:18:07 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:59.319 * Looking for test storage... 
00:07:59.319 17:18:07 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:59.319 17:18:07 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:59.319 17:18:07 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.319 17:18:07 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:59.319 17:18:07 -- common/autotest_common.sh@363 -- # mount=/ 00:07:59.319 17:18:07 -- common/autotest_common.sh@365 -- # target_space=121157947392 00:07:59.319 17:18:07 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:59.319 17:18:07 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:59.319 17:18:07 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:59.319 17:18:07 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:59.319 17:18:07 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:59.319 17:18:07 -- common/autotest_common.sh@372 -- # new_size=10413182976 00:07:59.319 17:18:07 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:59.319 17:18:07 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.319 17:18:07 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.319 17:18:07 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.319 17:18:07 -- common/autotest_common.sh@380 -- # return 0 00:07:59.319 17:18:07 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:59.319 17:18:07 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 
00:07:59.319 17:18:07 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:59.319 17:18:07 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:59.319 17:18:07 -- common/autotest_common.sh@1672 -- # true 00:07:59.319 17:18:07 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:59.319 17:18:07 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:59.319 17:18:07 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:59.319 17:18:07 -- common/autotest_common.sh@27 -- # exec 00:07:59.319 17:18:07 -- common/autotest_common.sh@29 -- # exec 00:07:59.319 17:18:07 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:59.319 17:18:07 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:59.319 17:18:07 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:59.319 17:18:07 -- common/autotest_common.sh@18 -- # set -x 00:07:59.319 17:18:07 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.319 17:18:07 -- nvmf/common.sh@7 -- # uname -s 00:07:59.319 17:18:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.319 17:18:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.319 17:18:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.319 17:18:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.319 17:18:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.319 17:18:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.319 17:18:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.319 17:18:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.319 17:18:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.319 17:18:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.319 17:18:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
00:07:59.319 17:18:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:59.319 17:18:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.319 17:18:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.319 17:18:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.319 17:18:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.319 17:18:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.319 17:18:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.319 17:18:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.319 17:18:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.319 17:18:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:59.319 17:18:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.319 17:18:07 -- paths/export.sh@5 -- # export PATH 00:07:59.319 17:18:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.319 17:18:07 -- nvmf/common.sh@46 -- # : 0 00:07:59.319 17:18:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:59.319 17:18:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:59.319 17:18:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:59.319 17:18:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.319 17:18:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.319 17:18:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:59.319 17:18:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:59.320 17:18:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:59.320 
17:18:07 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:59.320 17:18:07 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:59.320 17:18:07 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:59.320 17:18:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:59.320 17:18:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.320 17:18:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:59.320 17:18:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:59.320 17:18:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:59.320 17:18:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.320 17:18:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.320 17:18:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.320 17:18:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:59.320 17:18:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:59.320 17:18:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:59.320 17:18:07 -- common/autotest_common.sh@10 -- # set +x 00:08:07.471 17:18:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:07.471 17:18:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:07.471 17:18:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:07.471 17:18:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:07.471 17:18:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:07.471 17:18:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:07.471 17:18:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:07.471 17:18:14 -- nvmf/common.sh@294 -- # net_devs=() 00:08:07.471 17:18:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:07.471 17:18:14 -- nvmf/common.sh@295 -- # e810=() 00:08:07.471 17:18:14 -- nvmf/common.sh@295 -- # local -ga e810 00:08:07.471 17:18:14 -- nvmf/common.sh@296 -- # x722=() 00:08:07.471 17:18:14 -- nvmf/common.sh@296 -- # local -ga x722 00:08:07.471 17:18:14 -- nvmf/common.sh@297 -- # 
mlx=() 00:08:07.471 17:18:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:07.471 17:18:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.471 17:18:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:07.471 17:18:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:07.471 17:18:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:07.471 17:18:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:07.471 17:18:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:07.471 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:07.471 17:18:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:07.471 17:18:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:07.471 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:07.471 17:18:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:07.471 17:18:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:07.471 17:18:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:07.471 17:18:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.471 17:18:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:07.471 17:18:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.471 17:18:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:07.471 Found net devices under 0000:31:00.0: cvl_0_0 00:08:07.471 17:18:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.471 17:18:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:07.472 17:18:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.472 17:18:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:07.472 17:18:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.472 17:18:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:07.472 Found net devices under 0000:31:00.1: cvl_0_1 00:08:07.472 17:18:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:08:07.472 17:18:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:07.472 17:18:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:07.472 17:18:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:07.472 17:18:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:07.472 17:18:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:07.472 17:18:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.472 17:18:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.472 17:18:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.472 17:18:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:07.472 17:18:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.472 17:18:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.472 17:18:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:07.472 17:18:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.472 17:18:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.472 17:18:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:07.472 17:18:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:07.472 17:18:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.472 17:18:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.472 17:18:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.472 17:18:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.472 17:18:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:07.472 17:18:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.472 17:18:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.472 17:18:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.472 17:18:15 -- nvmf/common.sh@266 -- # ping 
-c 1 10.0.0.2 00:08:07.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:08:07.472 00:08:07.472 --- 10.0.0.2 ping statistics --- 00:08:07.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.472 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:08:07.472 17:18:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:08:07.472 00:08:07.472 --- 10.0.0.1 ping statistics --- 00:08:07.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.472 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:08:07.472 17:18:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.472 17:18:15 -- nvmf/common.sh@410 -- # return 0 00:08:07.472 17:18:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:07.472 17:18:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.472 17:18:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:07.472 17:18:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:07.472 17:18:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.472 17:18:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:07.472 17:18:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:07.472 17:18:15 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:07.472 17:18:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:07.472 17:18:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.472 17:18:15 -- common/autotest_common.sh@10 -- # set +x 00:08:07.472 ************************************ 00:08:07.472 START TEST nvmf_filesystem_no_in_capsule 00:08:07.472 ************************************ 00:08:07.472 17:18:15 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 
00:08:07.472 17:18:15 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:07.472 17:18:15 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:07.472 17:18:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:07.472 17:18:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:07.472 17:18:15 -- common/autotest_common.sh@10 -- # set +x 00:08:07.472 17:18:15 -- nvmf/common.sh@469 -- # nvmfpid=3009101 00:08:07.472 17:18:15 -- nvmf/common.sh@470 -- # waitforlisten 3009101 00:08:07.472 17:18:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.472 17:18:15 -- common/autotest_common.sh@819 -- # '[' -z 3009101 ']' 00:08:07.472 17:18:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.472 17:18:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:07.472 17:18:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.472 17:18:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:07.472 17:18:15 -- common/autotest_common.sh@10 -- # set +x 00:08:07.472 [2024-10-13 17:18:15.209041] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:07.472 [2024-10-13 17:18:15.209113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.472 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.472 [2024-10-13 17:18:15.282156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.472 [2024-10-13 17:18:15.320906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:07.472 [2024-10-13 17:18:15.321049] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.472 [2024-10-13 17:18:15.321060] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.472 [2024-10-13 17:18:15.321078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.472 [2024-10-13 17:18:15.321159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.472 [2024-10-13 17:18:15.321262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.472 [2024-10-13 17:18:15.321400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.472 [2024-10-13 17:18:15.321401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.733 17:18:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:07.733 17:18:16 -- common/autotest_common.sh@852 -- # return 0 00:08:07.733 17:18:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:07.733 17:18:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:07.733 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.733 17:18:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.733 17:18:16 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:07.733 17:18:16 -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:07.733 17:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.733 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.733 [2024-10-13 17:18:16.055434] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.733 17:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.733 17:18:16 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:07.733 17:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.733 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.733 Malloc1 00:08:07.733 17:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.733 17:18:16 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:07.733 17:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.733 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.733 17:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.733 17:18:16 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:07.733 17:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.733 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.733 17:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.733 17:18:16 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.733 17:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.733 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.733 [2024-10-13 17:18:16.182294] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.733 17:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.733 17:18:16 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 
00:08:07.733 17:18:16 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:07.733 17:18:16 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:07.733 17:18:16 -- common/autotest_common.sh@1359 -- # local bs 00:08:07.733 17:18:16 -- common/autotest_common.sh@1360 -- # local nb 00:08:07.733 17:18:16 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:07.733 17:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.733 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.733 17:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.733 17:18:16 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:07.733 { 00:08:07.733 "name": "Malloc1", 00:08:07.733 "aliases": [ 00:08:07.733 "e0f57d38-ef0f-49b7-a9ef-f5f3bd3679ce" 00:08:07.733 ], 00:08:07.733 "product_name": "Malloc disk", 00:08:07.733 "block_size": 512, 00:08:07.733 "num_blocks": 1048576, 00:08:07.733 "uuid": "e0f57d38-ef0f-49b7-a9ef-f5f3bd3679ce", 00:08:07.733 "assigned_rate_limits": { 00:08:07.733 "rw_ios_per_sec": 0, 00:08:07.733 "rw_mbytes_per_sec": 0, 00:08:07.733 "r_mbytes_per_sec": 0, 00:08:07.733 "w_mbytes_per_sec": 0 00:08:07.733 }, 00:08:07.733 "claimed": true, 00:08:07.733 "claim_type": "exclusive_write", 00:08:07.733 "zoned": false, 00:08:07.733 "supported_io_types": { 00:08:07.733 "read": true, 00:08:07.733 "write": true, 00:08:07.733 "unmap": true, 00:08:07.733 "write_zeroes": true, 00:08:07.733 "flush": true, 00:08:07.733 "reset": true, 00:08:07.733 "compare": false, 00:08:07.733 "compare_and_write": false, 00:08:07.733 "abort": true, 00:08:07.733 "nvme_admin": false, 00:08:07.733 "nvme_io": false 00:08:07.733 }, 00:08:07.733 "memory_domains": [ 00:08:07.733 { 00:08:07.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.733 "dma_device_type": 2 00:08:07.733 } 00:08:07.733 ], 00:08:07.733 "driver_specific": {} 00:08:07.733 } 00:08:07.733 ]' 00:08:07.733 17:18:16 -- common/autotest_common.sh@1362 -- # jq '.[] 
.block_size' 00:08:07.995 17:18:16 -- common/autotest_common.sh@1362 -- # bs=512 00:08:07.995 17:18:16 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:07.995 17:18:16 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:07.995 17:18:16 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:07.995 17:18:16 -- common/autotest_common.sh@1367 -- # echo 512 00:08:07.995 17:18:16 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:07.995 17:18:16 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:09.382 17:18:17 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:09.382 17:18:17 -- common/autotest_common.sh@1177 -- # local i=0 00:08:09.382 17:18:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:09.382 17:18:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:09.382 17:18:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:11.294 17:18:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:11.294 17:18:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:11.295 17:18:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:11.556 17:18:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:11.556 17:18:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:11.556 17:18:19 -- common/autotest_common.sh@1187 -- # return 0 00:08:11.556 17:18:19 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:11.556 17:18:19 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:11.556 17:18:19 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:11.556 17:18:19 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:11.556 17:18:19 -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:08:11.556 17:18:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:11.556 17:18:19 -- setup/common.sh@80 -- # echo 536870912 00:08:11.556 17:18:19 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:11.556 17:18:19 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:11.556 17:18:19 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:11.556 17:18:19 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:11.556 17:18:19 -- target/filesystem.sh@69 -- # partprobe 00:08:12.128 17:18:20 -- target/filesystem.sh@70 -- # sleep 1 00:08:13.070 17:18:21 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:13.070 17:18:21 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:13.070 17:18:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:13.070 17:18:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.070 17:18:21 -- common/autotest_common.sh@10 -- # set +x 00:08:13.070 ************************************ 00:08:13.070 START TEST filesystem_ext4 00:08:13.070 ************************************ 00:08:13.070 17:18:21 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:13.070 17:18:21 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:13.070 17:18:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:13.070 17:18:21 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:13.070 17:18:21 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:13.070 17:18:21 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:13.070 17:18:21 -- common/autotest_common.sh@904 -- # local i=0 00:08:13.070 17:18:21 -- common/autotest_common.sh@905 -- # local force 00:08:13.070 17:18:21 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:13.070 17:18:21 -- common/autotest_common.sh@908 -- # force=-F 00:08:13.070 17:18:21 -- 
common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:13.070 mke2fs 1.47.0 (5-Feb-2023) 00:08:13.070 Discarding device blocks: 0/522240 done 00:08:13.070 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:13.070 Filesystem UUID: 24234245-b05d-49f2-bc6a-29fa247df76e 00:08:13.070 Superblock backups stored on blocks: 00:08:13.070 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:13.070 00:08:13.070 Allocating group tables: 0/64 done 00:08:13.070 Writing inode tables: 0/64 done 00:08:16.372 Creating journal (8192 blocks): done 00:08:18.147 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:08:18.147 00:08:18.147 17:18:26 -- common/autotest_common.sh@921 -- # return 0 00:08:18.147 17:18:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:24.728 17:18:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:24.728 17:18:32 -- target/filesystem.sh@25 -- # sync 00:08:24.728 17:18:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:24.728 17:18:32 -- target/filesystem.sh@27 -- # sync 00:08:24.728 17:18:32 -- target/filesystem.sh@29 -- # i=0 00:08:24.728 17:18:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:24.728 17:18:32 -- target/filesystem.sh@37 -- # kill -0 3009101 00:08:24.728 17:18:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:24.728 17:18:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:24.728 17:18:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:24.728 17:18:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:24.728 00:08:24.728 real 0m11.148s 00:08:24.728 user 0m0.033s 00:08:24.728 sys 0m0.078s 00:08:24.728 17:18:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.728 17:18:32 -- common/autotest_common.sh@10 -- # set +x 00:08:24.728 ************************************ 00:08:24.728 END TEST filesystem_ext4 00:08:24.728 ************************************ 00:08:24.728 17:18:32 -- 
target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:24.728 17:18:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:24.728 17:18:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.728 17:18:32 -- common/autotest_common.sh@10 -- # set +x 00:08:24.728 ************************************ 00:08:24.728 START TEST filesystem_btrfs 00:08:24.728 ************************************ 00:08:24.728 17:18:32 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:24.728 17:18:32 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:24.728 17:18:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:24.728 17:18:32 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:24.728 17:18:32 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:24.728 17:18:32 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:24.728 17:18:32 -- common/autotest_common.sh@904 -- # local i=0 00:08:24.728 17:18:32 -- common/autotest_common.sh@905 -- # local force 00:08:24.728 17:18:32 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:24.728 17:18:32 -- common/autotest_common.sh@910 -- # force=-f 00:08:24.728 17:18:32 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:24.728 btrfs-progs v6.8.1 00:08:24.728 See https://btrfs.readthedocs.io for more information. 00:08:24.728 00:08:24.728 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:24.728 NOTE: several default settings have changed in version 5.15, please make sure 00:08:24.728 this does not affect your deployments: 00:08:24.728 - DUP for metadata (-m dup) 00:08:24.728 - enabled no-holes (-O no-holes) 00:08:24.728 - enabled free-space-tree (-R free-space-tree) 00:08:24.728 00:08:24.728 Label: (null) 00:08:24.728 UUID: 569201a3-8fb9-402c-9ecb-2dd7ca1b7116 00:08:24.728 Node size: 16384 00:08:24.728 Sector size: 4096 (CPU page size: 4096) 00:08:24.728 Filesystem size: 510.00MiB 00:08:24.728 Block group profiles: 00:08:24.728 Data: single 8.00MiB 00:08:24.728 Metadata: DUP 32.00MiB 00:08:24.728 System: DUP 8.00MiB 00:08:24.728 SSD detected: yes 00:08:24.728 Zoned device: no 00:08:24.728 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:24.728 Checksum: crc32c 00:08:24.728 Number of devices: 1 00:08:24.728 Devices: 00:08:24.728 ID SIZE PATH 00:08:24.728 1 510.00MiB /dev/nvme0n1p1 00:08:24.728 00:08:24.728 17:18:32 -- common/autotest_common.sh@921 -- # return 0 00:08:24.728 17:18:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:24.728 17:18:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:24.728 17:18:33 -- target/filesystem.sh@25 -- # sync 00:08:24.728 17:18:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:24.728 17:18:33 -- target/filesystem.sh@27 -- # sync 00:08:24.988 17:18:33 -- target/filesystem.sh@29 -- # i=0 00:08:24.989 17:18:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:24.989 17:18:33 -- target/filesystem.sh@37 -- # kill -0 3009101 00:08:24.989 17:18:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:24.989 17:18:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:24.989 17:18:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:24.989 17:18:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:24.989 00:08:24.989 real 0m0.721s 00:08:24.989 user 0m0.027s 00:08:24.989 sys 0m0.119s 00:08:24.989 17:18:33 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.989 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:08:24.989 ************************************ 00:08:24.989 END TEST filesystem_btrfs 00:08:24.989 ************************************ 00:08:24.989 17:18:33 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:24.989 17:18:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:24.989 17:18:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.989 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:08:24.989 ************************************ 00:08:24.989 START TEST filesystem_xfs 00:08:24.989 ************************************ 00:08:24.989 17:18:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:24.989 17:18:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:24.989 17:18:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:24.989 17:18:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:24.989 17:18:33 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:24.989 17:18:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:24.989 17:18:33 -- common/autotest_common.sh@904 -- # local i=0 00:08:24.989 17:18:33 -- common/autotest_common.sh@905 -- # local force 00:08:24.989 17:18:33 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:24.989 17:18:33 -- common/autotest_common.sh@910 -- # force=-f 00:08:24.989 17:18:33 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:24.989 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:24.989 = sectsz=512 attr=2, projid32bit=1 00:08:24.989 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:24.989 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:24.989 data = bsize=4096 blocks=130560, imaxpct=25 00:08:24.989 = sunit=0 swidth=0 blks 00:08:24.989 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:24.989 log 
=internal log bsize=4096 blocks=16384, version=2 00:08:24.989 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:24.989 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:25.930 Discarding blocks...Done. 00:08:25.930 17:18:34 -- common/autotest_common.sh@921 -- # return 0 00:08:25.930 17:18:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.472 17:18:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.472 17:18:36 -- target/filesystem.sh@25 -- # sync 00:08:28.472 17:18:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.472 17:18:36 -- target/filesystem.sh@27 -- # sync 00:08:28.472 17:18:36 -- target/filesystem.sh@29 -- # i=0 00:08:28.472 17:18:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.472 17:18:36 -- target/filesystem.sh@37 -- # kill -0 3009101 00:08:28.472 17:18:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.472 17:18:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.472 17:18:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.472 17:18:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.472 00:08:28.472 real 0m3.487s 00:08:28.472 user 0m0.024s 00:08:28.472 sys 0m0.082s 00:08:28.472 17:18:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.472 17:18:36 -- common/autotest_common.sh@10 -- # set +x 00:08:28.472 ************************************ 00:08:28.472 END TEST filesystem_xfs 00:08:28.472 ************************************ 00:08:28.472 17:18:36 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:28.472 17:18:36 -- target/filesystem.sh@93 -- # sync 00:08:28.472 17:18:36 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:28.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.733 17:18:37 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:28.733 17:18:37 -- common/autotest_common.sh@1198 -- # local i=0 
00:08:28.733 17:18:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:28.733 17:18:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.733 17:18:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:28.733 17:18:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.733 17:18:37 -- common/autotest_common.sh@1210 -- # return 0 00:08:28.733 17:18:37 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.733 17:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.733 17:18:37 -- common/autotest_common.sh@10 -- # set +x 00:08:28.733 17:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.733 17:18:37 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:28.733 17:18:37 -- target/filesystem.sh@101 -- # killprocess 3009101 00:08:28.733 17:18:37 -- common/autotest_common.sh@926 -- # '[' -z 3009101 ']' 00:08:28.733 17:18:37 -- common/autotest_common.sh@930 -- # kill -0 3009101 00:08:28.733 17:18:37 -- common/autotest_common.sh@931 -- # uname 00:08:28.733 17:18:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:28.733 17:18:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3009101 00:08:28.733 17:18:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:28.733 17:18:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:28.733 17:18:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3009101' 00:08:28.733 killing process with pid 3009101 00:08:28.733 17:18:37 -- common/autotest_common.sh@945 -- # kill 3009101 00:08:28.733 17:18:37 -- common/autotest_common.sh@950 -- # wait 3009101 00:08:28.994 17:18:37 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:28.994 00:08:28.994 real 0m22.265s 00:08:28.994 user 1m28.141s 00:08:28.994 sys 0m1.371s 00:08:28.995 17:18:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:08:28.995 17:18:37 -- common/autotest_common.sh@10 -- # set +x 00:08:28.995 ************************************ 00:08:28.995 END TEST nvmf_filesystem_no_in_capsule 00:08:28.995 ************************************ 00:08:28.995 17:18:37 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:28.995 17:18:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:28.995 17:18:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.995 17:18:37 -- common/autotest_common.sh@10 -- # set +x 00:08:28.995 ************************************ 00:08:28.995 START TEST nvmf_filesystem_in_capsule 00:08:28.995 ************************************ 00:08:28.995 17:18:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:28.995 17:18:37 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:28.995 17:18:37 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:28.995 17:18:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:28.995 17:18:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:28.995 17:18:37 -- common/autotest_common.sh@10 -- # set +x 00:08:28.995 17:18:37 -- nvmf/common.sh@469 -- # nvmfpid=3013711 00:08:28.995 17:18:37 -- nvmf/common.sh@470 -- # waitforlisten 3013711 00:08:28.995 17:18:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.995 17:18:37 -- common/autotest_common.sh@819 -- # '[' -z 3013711 ']' 00:08:28.995 17:18:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.995 17:18:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:28.995 17:18:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:28.995 17:18:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:28.995 17:18:37 -- common/autotest_common.sh@10 -- # set +x 00:08:28.995 [2024-10-13 17:18:37.519132] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:28.995 [2024-10-13 17:18:37.519189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.256 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.256 [2024-10-13 17:18:37.587726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.256 [2024-10-13 17:18:37.619915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:29.256 [2024-10-13 17:18:37.620052] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.256 [2024-10-13 17:18:37.620074] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.256 [2024-10-13 17:18:37.620089] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:29.256 [2024-10-13 17:18:37.620165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.256 [2024-10-13 17:18:37.620287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.256 [2024-10-13 17:18:37.620444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.256 [2024-10-13 17:18:37.620445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.827 17:18:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:29.827 17:18:38 -- common/autotest_common.sh@852 -- # return 0 00:08:29.827 17:18:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:29.827 17:18:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:29.827 17:18:38 -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 17:18:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.827 17:18:38 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:29.827 17:18:38 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:29.827 17:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.827 17:18:38 -- common/autotest_common.sh@10 -- # set +x 00:08:30.088 [2024-10-13 17:18:38.358480] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.088 17:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.088 17:18:38 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:30.088 17:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.088 17:18:38 -- common/autotest_common.sh@10 -- # set +x 00:08:30.088 Malloc1 00:08:30.088 17:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.088 17:18:38 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.088 17:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.088 17:18:38 -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.088 17:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.088 17:18:38 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.088 17:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.088 17:18:38 -- common/autotest_common.sh@10 -- # set +x 00:08:30.088 17:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.088 17:18:38 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.088 17:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.088 17:18:38 -- common/autotest_common.sh@10 -- # set +x 00:08:30.088 [2024-10-13 17:18:38.483035] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.088 17:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.088 17:18:38 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:30.088 17:18:38 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:30.088 17:18:38 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:30.088 17:18:38 -- common/autotest_common.sh@1359 -- # local bs 00:08:30.088 17:18:38 -- common/autotest_common.sh@1360 -- # local nb 00:08:30.088 17:18:38 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:30.088 17:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.088 17:18:38 -- common/autotest_common.sh@10 -- # set +x 00:08:30.088 17:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.088 17:18:38 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:30.088 { 00:08:30.088 "name": "Malloc1", 00:08:30.088 "aliases": [ 00:08:30.088 "2b857fb9-12f0-46e3-91cb-056d685be4ba" 00:08:30.088 ], 00:08:30.088 "product_name": "Malloc disk", 00:08:30.088 "block_size": 512, 00:08:30.088 "num_blocks": 1048576, 00:08:30.088 "uuid": 
"2b857fb9-12f0-46e3-91cb-056d685be4ba", 00:08:30.088 "assigned_rate_limits": { 00:08:30.088 "rw_ios_per_sec": 0, 00:08:30.088 "rw_mbytes_per_sec": 0, 00:08:30.088 "r_mbytes_per_sec": 0, 00:08:30.088 "w_mbytes_per_sec": 0 00:08:30.088 }, 00:08:30.088 "claimed": true, 00:08:30.088 "claim_type": "exclusive_write", 00:08:30.088 "zoned": false, 00:08:30.088 "supported_io_types": { 00:08:30.088 "read": true, 00:08:30.088 "write": true, 00:08:30.088 "unmap": true, 00:08:30.088 "write_zeroes": true, 00:08:30.088 "flush": true, 00:08:30.088 "reset": true, 00:08:30.088 "compare": false, 00:08:30.088 "compare_and_write": false, 00:08:30.088 "abort": true, 00:08:30.088 "nvme_admin": false, 00:08:30.088 "nvme_io": false 00:08:30.088 }, 00:08:30.088 "memory_domains": [ 00:08:30.088 { 00:08:30.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.088 "dma_device_type": 2 00:08:30.088 } 00:08:30.088 ], 00:08:30.088 "driver_specific": {} 00:08:30.088 } 00:08:30.088 ]' 00:08:30.088 17:18:38 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:30.088 17:18:38 -- common/autotest_common.sh@1362 -- # bs=512 00:08:30.088 17:18:38 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:30.088 17:18:38 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:30.088 17:18:38 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:30.088 17:18:38 -- common/autotest_common.sh@1367 -- # echo 512 00:08:30.088 17:18:38 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:30.089 17:18:38 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.002 17:18:40 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.002 17:18:40 -- common/autotest_common.sh@1177 -- # local i=0 00:08:32.002 17:18:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:08:32.002 17:18:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:32.002 17:18:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:33.915 17:18:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:33.915 17:18:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:33.915 17:18:42 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:33.915 17:18:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:33.915 17:18:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:33.915 17:18:42 -- common/autotest_common.sh@1187 -- # return 0 00:08:33.915 17:18:42 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:33.915 17:18:42 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:33.915 17:18:42 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:33.915 17:18:42 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:33.915 17:18:42 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:33.915 17:18:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:33.915 17:18:42 -- setup/common.sh@80 -- # echo 536870912 00:08:33.915 17:18:42 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:33.915 17:18:42 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:33.915 17:18:42 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:33.915 17:18:42 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:34.176 17:18:42 -- target/filesystem.sh@69 -- # partprobe 00:08:35.119 17:18:43 -- target/filesystem.sh@70 -- # sleep 1 00:08:36.064 17:18:44 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:36.064 17:18:44 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:36.064 17:18:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:36.064 17:18:44 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:08:36.064 17:18:44 -- common/autotest_common.sh@10 -- # set +x 00:08:36.064 ************************************ 00:08:36.064 START TEST filesystem_in_capsule_ext4 00:08:36.064 ************************************ 00:08:36.064 17:18:44 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:36.064 17:18:44 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:36.064 17:18:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:36.064 17:18:44 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:36.064 17:18:44 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:36.064 17:18:44 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:36.064 17:18:44 -- common/autotest_common.sh@904 -- # local i=0 00:08:36.064 17:18:44 -- common/autotest_common.sh@905 -- # local force 00:08:36.064 17:18:44 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:36.064 17:18:44 -- common/autotest_common.sh@908 -- # force=-F 00:08:36.064 17:18:44 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:36.064 mke2fs 1.47.0 (5-Feb-2023) 00:08:36.064 Discarding device blocks: 0/522240 done 00:08:36.064 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:36.064 Filesystem UUID: 9be4fdf9-ab3c-4ec1-abe4-befacf68f50d 00:08:36.064 Superblock backups stored on blocks: 00:08:36.064 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:36.064 00:08:36.064 Allocating group tables: 0/64 done 00:08:36.064 Writing inode tables: 0/64 done 00:08:36.325 Creating journal (8192 blocks): done 00:08:36.325 Writing superblocks and filesystem accounting information: 0/64 done 00:08:36.325 00:08:36.325 17:18:44 -- common/autotest_common.sh@921 -- # return 0 00:08:36.325 17:18:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:42.908 17:18:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:42.908 17:18:50 
-- target/filesystem.sh@25 -- # sync 00:08:42.908 17:18:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:42.908 17:18:50 -- target/filesystem.sh@27 -- # sync 00:08:42.908 17:18:50 -- target/filesystem.sh@29 -- # i=0 00:08:42.908 17:18:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:42.908 17:18:50 -- target/filesystem.sh@37 -- # kill -0 3013711 00:08:42.908 17:18:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:42.908 17:18:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:42.908 17:18:51 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:42.908 17:18:51 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:42.908 00:08:42.908 real 0m6.589s 00:08:42.908 user 0m0.031s 00:08:42.908 sys 0m0.072s 00:08:42.908 17:18:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.908 17:18:51 -- common/autotest_common.sh@10 -- # set +x 00:08:42.908 ************************************ 00:08:42.908 END TEST filesystem_in_capsule_ext4 00:08:42.908 ************************************ 00:08:42.908 17:18:51 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:42.908 17:18:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:42.908 17:18:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.908 17:18:51 -- common/autotest_common.sh@10 -- # set +x 00:08:42.908 ************************************ 00:08:42.908 START TEST filesystem_in_capsule_btrfs 00:08:42.908 ************************************ 00:08:42.908 17:18:51 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:42.908 17:18:51 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:42.908 17:18:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:42.908 17:18:51 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:42.908 17:18:51 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:42.908 17:18:51 -- 
common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:42.908 17:18:51 -- common/autotest_common.sh@904 -- # local i=0 00:08:42.908 17:18:51 -- common/autotest_common.sh@905 -- # local force 00:08:42.908 17:18:51 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:42.908 17:18:51 -- common/autotest_common.sh@910 -- # force=-f 00:08:42.908 17:18:51 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:42.908 btrfs-progs v6.8.1 00:08:42.908 See https://btrfs.readthedocs.io for more information. 00:08:42.908 00:08:42.908 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:42.908 NOTE: several default settings have changed in version 5.15, please make sure 00:08:42.908 this does not affect your deployments: 00:08:42.908 - DUP for metadata (-m dup) 00:08:42.908 - enabled no-holes (-O no-holes) 00:08:42.908 - enabled free-space-tree (-R free-space-tree) 00:08:42.908 00:08:42.908 Label: (null) 00:08:42.908 UUID: 33a384aa-4e8a-4eb6-bda3-f8b39ec3e48c 00:08:42.908 Node size: 16384 00:08:42.908 Sector size: 4096 (CPU page size: 4096) 00:08:42.908 Filesystem size: 510.00MiB 00:08:42.908 Block group profiles: 00:08:42.908 Data: single 8.00MiB 00:08:42.908 Metadata: DUP 32.00MiB 00:08:42.908 System: DUP 8.00MiB 00:08:42.908 SSD detected: yes 00:08:42.908 Zoned device: no 00:08:42.908 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:42.908 Checksum: crc32c 00:08:42.908 Number of devices: 1 00:08:42.908 Devices: 00:08:42.908 ID SIZE PATH 00:08:42.908 1 510.00MiB /dev/nvme0n1p1 00:08:42.908 00:08:42.908 17:18:51 -- common/autotest_common.sh@921 -- # return 0 00:08:42.908 17:18:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:43.851 17:18:52 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:43.851 17:18:52 -- target/filesystem.sh@25 -- # sync 00:08:43.851 17:18:52 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:43.851 17:18:52 -- target/filesystem.sh@27 -- # 
sync 00:08:43.851 17:18:52 -- target/filesystem.sh@29 -- # i=0 00:08:43.851 17:18:52 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:44.113 17:18:52 -- target/filesystem.sh@37 -- # kill -0 3013711 00:08:44.113 17:18:52 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:44.113 17:18:52 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:44.113 17:18:52 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:44.113 17:18:52 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:44.113 00:08:44.113 real 0m1.342s 00:08:44.113 user 0m0.035s 00:08:44.113 sys 0m0.113s 00:08:44.113 17:18:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.113 17:18:52 -- common/autotest_common.sh@10 -- # set +x 00:08:44.113 ************************************ 00:08:44.113 END TEST filesystem_in_capsule_btrfs 00:08:44.113 ************************************ 00:08:44.113 17:18:52 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:44.113 17:18:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:44.113 17:18:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.113 17:18:52 -- common/autotest_common.sh@10 -- # set +x 00:08:44.113 ************************************ 00:08:44.113 START TEST filesystem_in_capsule_xfs 00:08:44.113 ************************************ 00:08:44.113 17:18:52 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:44.113 17:18:52 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:44.113 17:18:52 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:44.113 17:18:52 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:44.113 17:18:52 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:44.113 17:18:52 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:44.113 17:18:52 -- common/autotest_common.sh@904 -- # local i=0 00:08:44.113 17:18:52 -- common/autotest_common.sh@905 -- # local 
force 00:08:44.113 17:18:52 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:44.113 17:18:52 -- common/autotest_common.sh@910 -- # force=-f 00:08:44.113 17:18:52 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:44.113 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:44.113 = sectsz=512 attr=2, projid32bit=1 00:08:44.113 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:44.113 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:44.113 data = bsize=4096 blocks=130560, imaxpct=25 00:08:44.113 = sunit=0 swidth=0 blks 00:08:44.113 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:44.113 log =internal log bsize=4096 blocks=16384, version=2 00:08:44.113 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:44.113 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:45.056 Discarding blocks...Done. 00:08:45.056 17:18:53 -- common/autotest_common.sh@921 -- # return 0 00:08:45.056 17:18:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:47.680 17:18:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:47.680 17:18:55 -- target/filesystem.sh@25 -- # sync 00:08:47.680 17:18:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:47.680 17:18:55 -- target/filesystem.sh@27 -- # sync 00:08:47.680 17:18:55 -- target/filesystem.sh@29 -- # i=0 00:08:47.680 17:18:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:47.680 17:18:55 -- target/filesystem.sh@37 -- # kill -0 3013711 00:08:47.680 17:18:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:47.680 17:18:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:47.680 17:18:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:47.680 17:18:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:47.680 00:08:47.680 real 0m3.486s 00:08:47.680 user 0m0.028s 00:08:47.680 sys 0m0.078s 00:08:47.680 17:18:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.680 17:18:55 -- common/autotest_common.sh@10 -- # set +x 
00:08:47.680 ************************************ 00:08:47.680 END TEST filesystem_in_capsule_xfs 00:08:47.680 ************************************ 00:08:47.680 17:18:55 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:47.680 17:18:56 -- target/filesystem.sh@93 -- # sync 00:08:47.680 17:18:56 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.680 17:18:56 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.680 17:18:56 -- common/autotest_common.sh@1198 -- # local i=0 00:08:47.680 17:18:56 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:47.680 17:18:56 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.940 17:18:56 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:47.940 17:18:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.940 17:18:56 -- common/autotest_common.sh@1210 -- # return 0 00:08:47.940 17:18:56 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.940 17:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.940 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:08:47.940 17:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.940 17:18:56 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:47.940 17:18:56 -- target/filesystem.sh@101 -- # killprocess 3013711 00:08:47.940 17:18:56 -- common/autotest_common.sh@926 -- # '[' -z 3013711 ']' 00:08:47.940 17:18:56 -- common/autotest_common.sh@930 -- # kill -0 3013711 00:08:47.940 17:18:56 -- common/autotest_common.sh@931 -- # uname 00:08:47.940 17:18:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:47.940 17:18:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3013711 00:08:47.940 17:18:56 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:47.940 17:18:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:47.940 17:18:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3013711' 00:08:47.940 killing process with pid 3013711 00:08:47.940 17:18:56 -- common/autotest_common.sh@945 -- # kill 3013711 00:08:47.940 17:18:56 -- common/autotest_common.sh@950 -- # wait 3013711 00:08:48.201 17:18:56 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:48.201 00:08:48.201 real 0m19.059s 00:08:48.201 user 1m15.390s 00:08:48.201 sys 0m1.372s 00:08:48.201 17:18:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.201 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:08:48.201 ************************************ 00:08:48.201 END TEST nvmf_filesystem_in_capsule 00:08:48.201 ************************************ 00:08:48.201 17:18:56 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:48.201 17:18:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:48.201 17:18:56 -- nvmf/common.sh@116 -- # sync 00:08:48.201 17:18:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:48.201 17:18:56 -- nvmf/common.sh@119 -- # set +e 00:08:48.201 17:18:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:48.201 17:18:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:48.201 rmmod nvme_tcp 00:08:48.201 rmmod nvme_fabrics 00:08:48.201 rmmod nvme_keyring 00:08:48.201 17:18:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:48.201 17:18:56 -- nvmf/common.sh@123 -- # set -e 00:08:48.201 17:18:56 -- nvmf/common.sh@124 -- # return 0 00:08:48.201 17:18:56 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:48.201 17:18:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:48.201 17:18:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:48.201 17:18:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:48.201 17:18:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.201 17:18:56 -- 
nvmf/common.sh@277 -- # remove_spdk_ns 00:08:48.201 17:18:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.201 17:18:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.201 17:18:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.752 17:18:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:50.752 00:08:50.752 real 0m51.439s 00:08:50.752 user 2m45.751s 00:08:50.752 sys 0m8.560s 00:08:50.752 17:18:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.752 17:18:58 -- common/autotest_common.sh@10 -- # set +x 00:08:50.752 ************************************ 00:08:50.752 END TEST nvmf_filesystem 00:08:50.752 ************************************ 00:08:50.752 17:18:58 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:50.752 17:18:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:50.752 17:18:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.752 17:18:58 -- common/autotest_common.sh@10 -- # set +x 00:08:50.752 ************************************ 00:08:50.752 START TEST nvmf_discovery 00:08:50.752 ************************************ 00:08:50.752 17:18:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:50.752 * Looking for test storage... 
00:08:50.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.752 17:18:58 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.752 17:18:58 -- nvmf/common.sh@7 -- # uname -s 00:08:50.752 17:18:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.752 17:18:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.752 17:18:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.752 17:18:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.752 17:18:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.752 17:18:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.752 17:18:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.752 17:18:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.752 17:18:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.752 17:18:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.752 17:18:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:50.752 17:18:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:50.752 17:18:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.752 17:18:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.752 17:18:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.752 17:18:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.752 17:18:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.752 17:18:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.752 17:18:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.752 17:18:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.752 17:18:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.752 17:18:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.752 17:18:58 -- paths/export.sh@5 -- # export PATH 00:08:50.752 17:18:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.752 17:18:58 -- nvmf/common.sh@46 -- # : 0 00:08:50.752 17:18:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:50.752 17:18:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:50.752 17:18:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:50.752 17:18:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.752 17:18:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.752 17:18:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:50.752 17:18:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:50.752 17:18:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:50.752 17:18:58 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:50.752 17:18:58 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:50.752 17:18:58 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:50.752 17:18:58 -- target/discovery.sh@15 -- # hash nvme 00:08:50.752 17:18:58 -- target/discovery.sh@20 -- # nvmftestinit 00:08:50.752 17:18:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:50.752 17:18:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.752 17:18:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:50.752 17:18:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:50.752 17:18:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:50.752 17:18:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.752 17:18:58 -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:08:50.752 17:18:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.752 17:18:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:50.752 17:18:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:50.752 17:18:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:50.752 17:18:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.343 17:19:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:57.343 17:19:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:57.343 17:19:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:57.343 17:19:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:57.343 17:19:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:57.343 17:19:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:57.343 17:19:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:57.343 17:19:05 -- nvmf/common.sh@294 -- # net_devs=() 00:08:57.343 17:19:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:57.343 17:19:05 -- nvmf/common.sh@295 -- # e810=() 00:08:57.343 17:19:05 -- nvmf/common.sh@295 -- # local -ga e810 00:08:57.343 17:19:05 -- nvmf/common.sh@296 -- # x722=() 00:08:57.343 17:19:05 -- nvmf/common.sh@296 -- # local -ga x722 00:08:57.343 17:19:05 -- nvmf/common.sh@297 -- # mlx=() 00:08:57.343 17:19:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:57.343 17:19:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.343 17:19:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:57.343 17:19:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:57.343 17:19:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:57.343 17:19:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:57.343 17:19:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:57.343 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:57.343 17:19:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:57.343 17:19:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:57.343 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:57.343 17:19:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:08:57.343 17:19:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:57.343 17:19:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:57.343 17:19:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.343 17:19:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:57.343 17:19:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.343 17:19:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:57.343 Found net devices under 0000:31:00.0: cvl_0_0 00:08:57.343 17:19:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.343 17:19:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:57.343 17:19:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.343 17:19:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:57.343 17:19:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.343 17:19:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:57.343 Found net devices under 0000:31:00.1: cvl_0_1 00:08:57.343 17:19:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.343 17:19:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:57.343 17:19:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:57.343 17:19:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:57.343 17:19:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.343 17:19:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.343 17:19:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.343 17:19:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:57.343 17:19:05 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.343 17:19:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.343 17:19:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:57.343 17:19:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.343 17:19:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.343 17:19:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:57.343 17:19:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:57.343 17:19:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.343 17:19:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.343 17:19:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.343 17:19:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.343 17:19:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:57.343 17:19:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.343 17:19:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.343 17:19:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.343 17:19:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:57.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:08:57.343 00:08:57.343 --- 10.0.0.2 ping statistics --- 00:08:57.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.343 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:08:57.343 17:19:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:08:57.343 00:08:57.343 --- 10.0.0.1 ping statistics --- 00:08:57.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.343 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:08:57.343 17:19:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.343 17:19:05 -- nvmf/common.sh@410 -- # return 0 00:08:57.343 17:19:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:57.343 17:19:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.343 17:19:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:57.343 17:19:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.343 17:19:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:57.343 17:19:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:57.343 17:19:05 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:57.343 17:19:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:57.343 17:19:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:57.343 17:19:05 -- common/autotest_common.sh@10 -- # set +x 00:08:57.343 17:19:05 -- nvmf/common.sh@469 -- # nvmfpid=3021815 00:08:57.343 17:19:05 -- nvmf/common.sh@470 -- # waitforlisten 3021815 00:08:57.343 17:19:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.343 17:19:05 -- common/autotest_common.sh@819 -- # '[' -z 3021815 ']' 00:08:57.343 17:19:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.343 17:19:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:57.343 17:19:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:57.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.343 17:19:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:57.343 17:19:05 -- common/autotest_common.sh@10 -- # set +x 00:08:57.604 [2024-10-13 17:19:05.914624] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:57.604 [2024-10-13 17:19:05.914703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.604 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.604 [2024-10-13 17:19:05.989883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.604 [2024-10-13 17:19:06.028599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:57.604 [2024-10-13 17:19:06.028740] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.604 [2024-10-13 17:19:06.028751] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.604 [2024-10-13 17:19:06.028759] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:57.604 [2024-10-13 17:19:06.028903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.604 [2024-10-13 17:19:06.029044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.604 [2024-10-13 17:19:06.029116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.604 [2024-10-13 17:19:06.029116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.547 17:19:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:58.547 17:19:06 -- common/autotest_common.sh@852 -- # return 0 00:08:58.547 17:19:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:58.547 17:19:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 17:19:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.547 17:19:06 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 [2024-10-13 17:19:06.749448] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@26 -- # seq 1 4 00:08:58.547 17:19:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:58.547 17:19:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 Null1 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 [2024-10-13 17:19:06.809797] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:58.547 17:19:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 Null2 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 
17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:58.547 17:19:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 Null3 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:58.547 17:19:06 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 Null4 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.547 17:19:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:58.547 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.547 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.547 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.548 17:19:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:58.548 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.548 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.548 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.548 17:19:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:58.548 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.548 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.548 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.548 17:19:06 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:58.548 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.548 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.548 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.548 17:19:06 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:58.548 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.548 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.548 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.548 17:19:06 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:58.918 00:08:58.918 Discovery Log Number of Records 6, Generation counter 6 00:08:58.918 =====Discovery Log Entry 0====== 00:08:58.918 trtype: tcp 00:08:58.918 adrfam: ipv4 00:08:58.918 subtype: current discovery subsystem 00:08:58.918 treq: not required 00:08:58.918 portid: 0 00:08:58.918 trsvcid: 4420 00:08:58.918 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:58.918 traddr: 10.0.0.2 00:08:58.918 eflags: explicit discovery connections, duplicate discovery information 00:08:58.918 sectype: none 00:08:58.918 =====Discovery Log Entry 1====== 00:08:58.918 trtype: tcp 00:08:58.918 adrfam: ipv4 00:08:58.918 subtype: nvme subsystem 00:08:58.918 treq: not required 00:08:58.918 portid: 0 00:08:58.918 trsvcid: 4420 00:08:58.918 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:58.918 traddr: 10.0.0.2 00:08:58.918 eflags: none 00:08:58.918 sectype: none 00:08:58.918 =====Discovery Log Entry 2====== 00:08:58.918 trtype: tcp 00:08:58.918 adrfam: ipv4 00:08:58.918 subtype: nvme subsystem 00:08:58.918 treq: not required 00:08:58.918 portid: 0 00:08:58.918 trsvcid: 4420 00:08:58.918 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:58.918 traddr: 10.0.0.2 00:08:58.918 eflags: none 00:08:58.918 sectype: none 00:08:58.918 =====Discovery Log Entry 3====== 00:08:58.918 trtype: tcp 00:08:58.918 adrfam: ipv4 00:08:58.918 subtype: nvme subsystem 00:08:58.918 treq: not required 00:08:58.918 portid: 0 00:08:58.918 trsvcid: 4420 00:08:58.918 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:58.918 traddr: 10.0.0.2 00:08:58.918 eflags: none 00:08:58.918 sectype: none 00:08:58.918 =====Discovery Log Entry 4====== 00:08:58.918 trtype: tcp 00:08:58.918 adrfam: ipv4 00:08:58.918 subtype: nvme subsystem 00:08:58.918 treq: not required 00:08:58.918 portid: 0 00:08:58.918 trsvcid: 4420 00:08:58.918 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:08:58.918 traddr: 10.0.0.2 00:08:58.918 eflags: none 00:08:58.918 sectype: none 00:08:58.918 =====Discovery Log Entry 5====== 00:08:58.918 trtype: tcp 00:08:58.918 adrfam: ipv4 00:08:58.918 subtype: discovery subsystem referral 00:08:58.918 treq: not required 00:08:58.918 portid: 0 00:08:58.918 trsvcid: 4430 00:08:58.919 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:58.919 traddr: 10.0.0.2 00:08:58.919 eflags: none 00:08:58.919 sectype: none 00:08:58.919 17:19:07 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:58.919 Perform nvmf subsystem discovery via RPC 00:08:58.919 17:19:07 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 [2024-10-13 17:19:07.154824] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:58.919 [ 00:08:58.919 { 00:08:58.919 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:58.919 "subtype": "Discovery", 00:08:58.919 "listen_addresses": [ 00:08:58.919 { 00:08:58.919 "transport": "TCP", 00:08:58.919 "trtype": "TCP", 00:08:58.919 "adrfam": "IPv4", 00:08:58.919 "traddr": "10.0.0.2", 00:08:58.919 "trsvcid": "4420" 00:08:58.919 } 00:08:58.919 ], 00:08:58.919 "allow_any_host": true, 00:08:58.919 "hosts": [] 00:08:58.919 }, 00:08:58.919 { 00:08:58.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.919 "subtype": "NVMe", 00:08:58.919 "listen_addresses": [ 00:08:58.919 { 00:08:58.919 "transport": "TCP", 00:08:58.919 "trtype": "TCP", 00:08:58.919 "adrfam": "IPv4", 00:08:58.919 "traddr": "10.0.0.2", 00:08:58.919 "trsvcid": "4420" 00:08:58.919 } 00:08:58.919 ], 00:08:58.919 "allow_any_host": true, 00:08:58.919 "hosts": [], 00:08:58.919 "serial_number": "SPDK00000000000001", 00:08:58.919 "model_number": 
"SPDK bdev Controller", 00:08:58.919 "max_namespaces": 32, 00:08:58.919 "min_cntlid": 1, 00:08:58.919 "max_cntlid": 65519, 00:08:58.919 "namespaces": [ 00:08:58.919 { 00:08:58.919 "nsid": 1, 00:08:58.919 "bdev_name": "Null1", 00:08:58.919 "name": "Null1", 00:08:58.919 "nguid": "925D7C585BAA4888BF16E8928C8EDDB1", 00:08:58.919 "uuid": "925d7c58-5baa-4888-bf16-e8928c8eddb1" 00:08:58.919 } 00:08:58.919 ] 00:08:58.919 }, 00:08:58.919 { 00:08:58.919 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:58.919 "subtype": "NVMe", 00:08:58.919 "listen_addresses": [ 00:08:58.919 { 00:08:58.919 "transport": "TCP", 00:08:58.919 "trtype": "TCP", 00:08:58.919 "adrfam": "IPv4", 00:08:58.919 "traddr": "10.0.0.2", 00:08:58.919 "trsvcid": "4420" 00:08:58.919 } 00:08:58.919 ], 00:08:58.919 "allow_any_host": true, 00:08:58.919 "hosts": [], 00:08:58.919 "serial_number": "SPDK00000000000002", 00:08:58.919 "model_number": "SPDK bdev Controller", 00:08:58.919 "max_namespaces": 32, 00:08:58.919 "min_cntlid": 1, 00:08:58.919 "max_cntlid": 65519, 00:08:58.919 "namespaces": [ 00:08:58.919 { 00:08:58.919 "nsid": 1, 00:08:58.919 "bdev_name": "Null2", 00:08:58.919 "name": "Null2", 00:08:58.919 "nguid": "ACA50C87B8654A7AA13D59612102F4A5", 00:08:58.919 "uuid": "aca50c87-b865-4a7a-a13d-59612102f4a5" 00:08:58.919 } 00:08:58.919 ] 00:08:58.919 }, 00:08:58.919 { 00:08:58.919 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:58.919 "subtype": "NVMe", 00:08:58.919 "listen_addresses": [ 00:08:58.919 { 00:08:58.919 "transport": "TCP", 00:08:58.919 "trtype": "TCP", 00:08:58.919 "adrfam": "IPv4", 00:08:58.919 "traddr": "10.0.0.2", 00:08:58.919 "trsvcid": "4420" 00:08:58.919 } 00:08:58.919 ], 00:08:58.919 "allow_any_host": true, 00:08:58.919 "hosts": [], 00:08:58.919 "serial_number": "SPDK00000000000003", 00:08:58.919 "model_number": "SPDK bdev Controller", 00:08:58.919 "max_namespaces": 32, 00:08:58.919 "min_cntlid": 1, 00:08:58.919 "max_cntlid": 65519, 00:08:58.919 "namespaces": [ 00:08:58.919 { 00:08:58.919 "nsid": 1, 
00:08:58.919 "bdev_name": "Null3", 00:08:58.919 "name": "Null3", 00:08:58.919 "nguid": "2BC2FDB1822F4EDDA657F0EA538E42A7", 00:08:58.919 "uuid": "2bc2fdb1-822f-4edd-a657-f0ea538e42a7" 00:08:58.919 } 00:08:58.919 ] 00:08:58.919 }, 00:08:58.919 { 00:08:58.919 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:58.919 "subtype": "NVMe", 00:08:58.919 "listen_addresses": [ 00:08:58.919 { 00:08:58.919 "transport": "TCP", 00:08:58.919 "trtype": "TCP", 00:08:58.919 "adrfam": "IPv4", 00:08:58.919 "traddr": "10.0.0.2", 00:08:58.919 "trsvcid": "4420" 00:08:58.919 } 00:08:58.919 ], 00:08:58.919 "allow_any_host": true, 00:08:58.919 "hosts": [], 00:08:58.919 "serial_number": "SPDK00000000000004", 00:08:58.919 "model_number": "SPDK bdev Controller", 00:08:58.919 "max_namespaces": 32, 00:08:58.919 "min_cntlid": 1, 00:08:58.919 "max_cntlid": 65519, 00:08:58.919 "namespaces": [ 00:08:58.919 { 00:08:58.919 "nsid": 1, 00:08:58.919 "bdev_name": "Null4", 00:08:58.919 "name": "Null4", 00:08:58.919 "nguid": "397197D1AFC04FE7920A3660326DD0C7", 00:08:58.919 "uuid": "397197d1-afc0-4fe7-920a-3660326dd0c7" 00:08:58.919 } 00:08:58.919 ] 00:08:58.919 } 00:08:58.919 ] 00:08:58.919 17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- target/discovery.sh@42 -- # seq 1 4 00:08:58.919 17:19:07 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:58.919 17:19:07 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:58.919 17:19:07 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:58.919 17:19:07 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:58.919 17:19:07 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 
17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:58.919 17:19:07 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:58.919 17:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.919 17:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 17:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.919 17:19:07 -- target/discovery.sh@49 -- # check_bdevs= 00:08:58.919 17:19:07 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:58.919 17:19:07 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:58.919 17:19:07 -- target/discovery.sh@57 -- # nvmftestfini 00:08:58.919 17:19:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:58.919 17:19:07 -- nvmf/common.sh@116 -- # sync 00:08:58.919 17:19:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:58.919 17:19:07 -- nvmf/common.sh@119 -- # set +e 00:08:58.919 17:19:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:58.919 17:19:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:58.919 rmmod nvme_tcp 00:08:58.919 rmmod nvme_fabrics 00:08:58.919 rmmod nvme_keyring 00:08:58.919 17:19:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:58.919 17:19:07 -- nvmf/common.sh@123 -- # set -e 00:08:58.919 17:19:07 -- nvmf/common.sh@124 -- # return 0 00:08:58.919 17:19:07 -- nvmf/common.sh@477 -- # '[' -n 3021815 ']' 00:08:58.919 17:19:07 -- nvmf/common.sh@478 -- # killprocess 3021815 00:08:58.919 17:19:07 -- common/autotest_common.sh@926 -- # '[' -z 3021815 ']' 00:08:58.919 17:19:07 -- common/autotest_common.sh@930 -- # kill -0 3021815 00:08:58.919 
17:19:07 -- common/autotest_common.sh@931 -- # uname 00:08:58.919 17:19:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:58.919 17:19:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3021815 00:08:59.180 17:19:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:59.180 17:19:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:59.180 17:19:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3021815' 00:08:59.180 killing process with pid 3021815 00:08:59.180 17:19:07 -- common/autotest_common.sh@945 -- # kill 3021815 00:08:59.180 [2024-10-13 17:19:07.460519] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:59.180 17:19:07 -- common/autotest_common.sh@950 -- # wait 3021815 00:08:59.180 17:19:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:59.180 17:19:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:59.180 17:19:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:59.180 17:19:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.180 17:19:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:59.180 17:19:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.180 17:19:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.180 17:19:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.724 17:19:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:01.724 00:09:01.724 real 0m10.904s 00:09:01.724 user 0m8.185s 00:09:01.724 sys 0m5.695s 00:09:01.724 17:19:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.724 17:19:09 -- common/autotest_common.sh@10 -- # set +x 00:09:01.724 ************************************ 00:09:01.724 END TEST nvmf_discovery 00:09:01.724 ************************************ 00:09:01.724 17:19:09 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:01.724 17:19:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:01.724 17:19:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:01.724 17:19:09 -- common/autotest_common.sh@10 -- # set +x 00:09:01.724 ************************************ 00:09:01.724 START TEST nvmf_referrals 00:09:01.724 ************************************ 00:09:01.724 17:19:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:01.724 * Looking for test storage... 00:09:01.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.724 17:19:09 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.724 17:19:09 -- nvmf/common.sh@7 -- # uname -s 00:09:01.724 17:19:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.724 17:19:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.724 17:19:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.724 17:19:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.724 17:19:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.724 17:19:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.724 17:19:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.724 17:19:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.725 17:19:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.725 17:19:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.725 17:19:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.725 17:19:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.725 17:19:09 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.725 17:19:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.725 17:19:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.725 17:19:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.725 17:19:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.725 17:19:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.725 17:19:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.725 17:19:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.725 17:19:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.725 17:19:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.725 17:19:09 -- paths/export.sh@5 -- # export PATH 00:09:01.725 17:19:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.725 17:19:09 -- nvmf/common.sh@46 -- # : 0 00:09:01.725 17:19:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:01.725 17:19:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:01.725 17:19:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:01.725 17:19:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.725 17:19:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.725 17:19:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:01.725 17:19:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:01.725 17:19:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:01.725 17:19:09 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:01.725 17:19:09 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:01.725 17:19:09 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:01.725 17:19:09 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:01.725 17:19:09 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:01.725 17:19:09 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:01.725 17:19:09 -- target/referrals.sh@37 -- # nvmftestinit 00:09:01.725 17:19:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:01.725 17:19:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.725 17:19:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:01.725 17:19:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:01.726 17:19:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:01.726 17:19:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.726 17:19:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.726 17:19:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.726 17:19:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:01.726 17:19:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:01.726 17:19:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:01.726 17:19:09 -- common/autotest_common.sh@10 -- # set +x 00:09:08.314 17:19:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:08.314 17:19:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:08.314 17:19:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:08.314 17:19:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:08.314 17:19:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:08.314 17:19:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:08.314 17:19:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:08.314 17:19:16 -- nvmf/common.sh@294 -- # net_devs=() 00:09:08.314 17:19:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:08.314 17:19:16 -- nvmf/common.sh@295 -- # e810=() 00:09:08.314 17:19:16 -- nvmf/common.sh@295 -- # local 
-ga e810 00:09:08.314 17:19:16 -- nvmf/common.sh@296 -- # x722=() 00:09:08.314 17:19:16 -- nvmf/common.sh@296 -- # local -ga x722 00:09:08.314 17:19:16 -- nvmf/common.sh@297 -- # mlx=() 00:09:08.314 17:19:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:08.314 17:19:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.314 17:19:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.315 17:19:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.315 17:19:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.315 17:19:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.315 17:19:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.315 17:19:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.315 17:19:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.315 17:19:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.315 17:19:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.315 17:19:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.315 17:19:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:08.315 17:19:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:08.315 17:19:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:08.315 17:19:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:08.315 17:19:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:08.315 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:08.315 17:19:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:08.315 17:19:16 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:08.315 17:19:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:08.315 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:08.315 17:19:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:08.315 17:19:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:08.315 17:19:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.315 17:19:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:08.315 17:19:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.315 17:19:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:08.315 Found net devices under 0000:31:00.0: cvl_0_0 00:09:08.315 17:19:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.315 17:19:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:08.315 17:19:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.315 17:19:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:08.315 17:19:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.315 17:19:16 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:08.315 Found net devices under 0000:31:00.1: cvl_0_1 00:09:08.315 17:19:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.315 17:19:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:08.315 17:19:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:08.315 17:19:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:08.315 17:19:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:08.315 17:19:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.315 17:19:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.315 17:19:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.315 17:19:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:08.315 17:19:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.315 17:19:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.315 17:19:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:08.315 17:19:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.315 17:19:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.315 17:19:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:08.315 17:19:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:08.315 17:19:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.576 17:19:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.576 17:19:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.576 17:19:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.576 17:19:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:08.576 17:19:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.837 17:19:17 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:08.837 17:19:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.838 17:19:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:08.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:09:08.838 00:09:08.838 --- 10.0.0.2 ping statistics --- 00:09:08.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.838 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:09:08.838 17:19:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:09:08.838 00:09:08.838 --- 10.0.0.1 ping statistics --- 00:09:08.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.838 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:09:08.838 17:19:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.838 17:19:17 -- nvmf/common.sh@410 -- # return 0 00:09:08.838 17:19:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:08.838 17:19:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.838 17:19:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:08.838 17:19:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:08.838 17:19:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.838 17:19:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:08.838 17:19:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:08.838 17:19:17 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:08.838 17:19:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:08.838 17:19:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:08.838 17:19:17 -- common/autotest_common.sh@10 -- # set +x 00:09:08.838 17:19:17 -- nvmf/common.sh@469 -- # nvmfpid=3026589 00:09:08.838 17:19:17 
-- nvmf/common.sh@470 -- # waitforlisten 3026589 00:09:08.838 17:19:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.838 17:19:17 -- common/autotest_common.sh@819 -- # '[' -z 3026589 ']' 00:09:08.838 17:19:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.838 17:19:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.838 17:19:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.838 17:19:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.838 17:19:17 -- common/autotest_common.sh@10 -- # set +x 00:09:08.838 [2024-10-13 17:19:17.244682] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:08.838 [2024-10-13 17:19:17.244751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.838 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.838 [2024-10-13 17:19:17.319182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.838 [2024-10-13 17:19:17.356990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:08.838 [2024-10-13 17:19:17.357144] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.838 [2024-10-13 17:19:17.357156] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.838 [2024-10-13 17:19:17.357165] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:08.838 [2024-10-13 17:19:17.357241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.838 [2024-10-13 17:19:17.357367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.838 [2024-10-13 17:19:17.357534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.838 [2024-10-13 17:19:17.357534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.779 17:19:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:09.779 17:19:18 -- common/autotest_common.sh@852 -- # return 0 00:09:09.779 17:19:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:09.779 17:19:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:09.779 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:09.779 17:19:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.779 17:19:18 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.779 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.779 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:09.779 [2024-10-13 17:19:18.088382] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.779 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.779 17:19:18 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:09.779 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.779 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:09.779 [2024-10-13 17:19:18.104601] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:09.779 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.779 17:19:18 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:09.779 17:19:18 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:09:09.779 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:09.779 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.779 17:19:18 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:09.779 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.779 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:09.779 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.779 17:19:18 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:09.779 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.779 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:09.779 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.779 17:19:18 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:09.779 17:19:18 -- target/referrals.sh@48 -- # jq length 00:09:09.779 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.779 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:09.779 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.779 17:19:18 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:09.779 17:19:18 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:09.779 17:19:18 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:09.779 17:19:18 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:09.779 17:19:18 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:09.779 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.779 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:09.779 17:19:18 -- target/referrals.sh@21 -- # sort 00:09:09.779 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.779 17:19:18 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:09.779 17:19:18 -- target/referrals.sh@49 -- # [[ 127.0.0.2 
127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:09.779 17:19:18 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:09.779 17:19:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:09.779 17:19:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:09.779 17:19:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.779 17:19:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:09.779 17:19:18 -- target/referrals.sh@26 -- # sort 00:09:10.040 17:19:18 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:10.040 17:19:18 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:10.040 17:19:18 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:10.040 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.040 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:10.040 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.040 17:19:18 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:10.040 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.040 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:10.040 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.040 17:19:18 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:10.040 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.040 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:10.040 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.040 17:19:18 -- 
target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:10.040 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.040 17:19:18 -- target/referrals.sh@56 -- # jq length 00:09:10.040 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:10.040 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.040 17:19:18 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:10.040 17:19:18 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:10.040 17:19:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:10.040 17:19:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:10.040 17:19:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:10.040 17:19:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:10.040 17:19:18 -- target/referrals.sh@26 -- # sort 00:09:10.300 17:19:18 -- target/referrals.sh@26 -- # echo 00:09:10.300 17:19:18 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:10.300 17:19:18 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:10.300 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.300 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:10.300 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.300 17:19:18 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:10.300 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.300 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:10.300 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.300 17:19:18 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:10.300 17:19:18 -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:10.300 17:19:18 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:10.300 17:19:18 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:10.300 17:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.300 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:10.300 17:19:18 -- target/referrals.sh@21 -- # sort 00:09:10.300 17:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.300 17:19:18 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:10.300 17:19:18 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:10.300 17:19:18 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:10.300 17:19:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:10.300 17:19:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:10.300 17:19:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:10.300 17:19:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:10.300 17:19:18 -- target/referrals.sh@26 -- # sort 00:09:10.560 17:19:18 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:10.560 17:19:18 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:10.560 17:19:18 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:10.560 17:19:18 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:10.560 17:19:18 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:10.560 17:19:18 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 
00:09:10.560 17:19:18 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:10.821 17:19:19 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:10.821 17:19:19 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:10.821 17:19:19 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:10.821 17:19:19 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:10.821 17:19:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:10.821 17:19:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:11.082 17:19:19 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:11.082 17:19:19 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:11.082 17:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.082 17:19:19 -- common/autotest_common.sh@10 -- # set +x 00:09:11.082 17:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.082 17:19:19 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:11.082 17:19:19 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:11.082 17:19:19 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:11.082 17:19:19 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:11.082 17:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.082 17:19:19 -- common/autotest_common.sh@10 -- # set +x 00:09:11.082 17:19:19 -- target/referrals.sh@21 -- # sort 00:09:11.082 17:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:09:11.082 17:19:19 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:11.082 17:19:19 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:11.082 17:19:19 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:11.082 17:19:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:11.082 17:19:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:11.082 17:19:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:11.082 17:19:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:11.082 17:19:19 -- target/referrals.sh@26 -- # sort 00:09:11.342 17:19:19 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:11.342 17:19:19 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:11.342 17:19:19 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:11.343 17:19:19 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:11.343 17:19:19 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:11.343 17:19:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:11.343 17:19:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:11.343 17:19:19 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:11.343 17:19:19 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:11.343 17:19:19 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:11.343 17:19:19 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:11.343 17:19:19 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:11.343 17:19:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:11.604 17:19:20 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:11.604 17:19:20 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:11.604 17:19:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.604 17:19:20 -- common/autotest_common.sh@10 -- # set +x 00:09:11.604 17:19:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.604 17:19:20 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:11.604 17:19:20 -- target/referrals.sh@82 -- # jq length 00:09:11.604 17:19:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.604 17:19:20 -- common/autotest_common.sh@10 -- # set +x 00:09:11.604 17:19:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.604 17:19:20 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:11.604 17:19:20 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:11.604 17:19:20 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:11.604 17:19:20 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:11.604 17:19:20 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:11.604 17:19:20 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:11.604 17:19:20 -- target/referrals.sh@26 -- # sort 00:09:11.865 17:19:20 -- target/referrals.sh@26 -- # echo 00:09:11.865 17:19:20 -- 
target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:11.865 17:19:20 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:11.865 17:19:20 -- target/referrals.sh@86 -- # nvmftestfini 00:09:11.865 17:19:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:11.865 17:19:20 -- nvmf/common.sh@116 -- # sync 00:09:11.865 17:19:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:11.865 17:19:20 -- nvmf/common.sh@119 -- # set +e 00:09:11.865 17:19:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:11.865 17:19:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:11.865 rmmod nvme_tcp 00:09:11.865 rmmod nvme_fabrics 00:09:11.865 rmmod nvme_keyring 00:09:11.865 17:19:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:11.865 17:19:20 -- nvmf/common.sh@123 -- # set -e 00:09:11.865 17:19:20 -- nvmf/common.sh@124 -- # return 0 00:09:11.865 17:19:20 -- nvmf/common.sh@477 -- # '[' -n 3026589 ']' 00:09:11.865 17:19:20 -- nvmf/common.sh@478 -- # killprocess 3026589 00:09:11.865 17:19:20 -- common/autotest_common.sh@926 -- # '[' -z 3026589 ']' 00:09:11.865 17:19:20 -- common/autotest_common.sh@930 -- # kill -0 3026589 00:09:11.865 17:19:20 -- common/autotest_common.sh@931 -- # uname 00:09:11.865 17:19:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:11.865 17:19:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3026589 00:09:12.126 17:19:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:12.126 17:19:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:12.126 17:19:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3026589' 00:09:12.126 killing process with pid 3026589 00:09:12.126 17:19:20 -- common/autotest_common.sh@945 -- # kill 3026589 00:09:12.126 17:19:20 -- common/autotest_common.sh@950 -- # wait 3026589 00:09:12.126 17:19:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:12.126 17:19:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:12.126 
17:19:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:12.126 17:19:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.126 17:19:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:12.126 17:19:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.126 17:19:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.126 17:19:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.671 17:19:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:14.671 00:09:14.671 real 0m12.941s 00:09:14.671 user 0m15.734s 00:09:14.671 sys 0m6.329s 00:09:14.671 17:19:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.671 17:19:22 -- common/autotest_common.sh@10 -- # set +x 00:09:14.671 ************************************ 00:09:14.671 END TEST nvmf_referrals 00:09:14.671 ************************************ 00:09:14.671 17:19:22 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:14.671 17:19:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:14.671 17:19:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.671 17:19:22 -- common/autotest_common.sh@10 -- # set +x 00:09:14.671 ************************************ 00:09:14.671 START TEST nvmf_connect_disconnect 00:09:14.671 ************************************ 00:09:14.671 17:19:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:14.671 * Looking for test storage... 
00:09:14.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.671 17:19:22 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.671 17:19:22 -- nvmf/common.sh@7 -- # uname -s 00:09:14.671 17:19:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.671 17:19:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.671 17:19:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.671 17:19:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.671 17:19:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.671 17:19:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.671 17:19:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.671 17:19:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.671 17:19:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.671 17:19:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.671 17:19:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:14.671 17:19:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:14.671 17:19:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.671 17:19:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.671 17:19:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.671 17:19:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.671 17:19:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.671 17:19:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.671 17:19:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.671 17:19:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.671 17:19:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.671 17:19:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.671 17:19:22 -- paths/export.sh@5 -- # export PATH 00:09:14.671 17:19:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.671 17:19:22 -- nvmf/common.sh@46 -- # : 0 00:09:14.671 17:19:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:14.671 17:19:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:14.671 17:19:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:14.671 17:19:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.671 17:19:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.671 17:19:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:14.671 17:19:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:14.671 17:19:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:14.671 17:19:22 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.671 17:19:22 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.671 17:19:22 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:14.671 17:19:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:14.671 17:19:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.671 17:19:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:14.671 17:19:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:14.671 17:19:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:14.671 17:19:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.671 17:19:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.671 17:19:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:14.671 17:19:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:14.671 17:19:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:14.671 17:19:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:14.671 17:19:22 -- common/autotest_common.sh@10 -- # set +x 00:09:21.259 17:19:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:21.259 17:19:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:21.259 17:19:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:21.259 17:19:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:21.259 17:19:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:21.259 17:19:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:21.259 17:19:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:21.259 17:19:29 -- nvmf/common.sh@294 -- # net_devs=() 00:09:21.259 17:19:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:21.259 17:19:29 -- nvmf/common.sh@295 -- # e810=() 00:09:21.259 17:19:29 -- nvmf/common.sh@295 -- # local -ga e810 00:09:21.259 17:19:29 -- nvmf/common.sh@296 -- # x722=() 00:09:21.259 17:19:29 -- nvmf/common.sh@296 -- # local -ga x722 00:09:21.259 17:19:29 -- nvmf/common.sh@297 -- # mlx=() 00:09:21.259 17:19:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:21.259 17:19:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.259 17:19:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.259 17:19:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.259 17:19:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.259 17:19:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.259 17:19:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.259 17:19:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.259 17:19:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:09:21.259 17:19:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.259 17:19:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.259 17:19:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.259 17:19:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:21.259 17:19:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:21.259 17:19:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:21.259 17:19:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:21.259 17:19:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:21.259 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:21.259 17:19:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:21.259 17:19:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:21.259 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:21.259 17:19:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:21.259 17:19:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:21.259 
17:19:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:21.259 17:19:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.259 17:19:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:21.259 17:19:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.259 17:19:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:21.259 Found net devices under 0000:31:00.0: cvl_0_0 00:09:21.259 17:19:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.259 17:19:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:21.259 17:19:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.259 17:19:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:21.259 17:19:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.259 17:19:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:21.259 Found net devices under 0000:31:00.1: cvl_0_1 00:09:21.259 17:19:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.259 17:19:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:21.259 17:19:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:21.259 17:19:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:21.259 17:19:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:21.259 17:19:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.259 17:19:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.259 17:19:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.259 17:19:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:21.259 17:19:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.259 17:19:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.259 17:19:29 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:21.259 17:19:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.259 17:19:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.259 17:19:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:21.520 17:19:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:21.520 17:19:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.520 17:19:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.520 17:19:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.520 17:19:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.520 17:19:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:21.520 17:19:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.781 17:19:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.781 17:19:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.781 17:19:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:21.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:09:21.781 00:09:21.781 --- 10.0.0.2 ping statistics --- 00:09:21.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.781 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:09:21.781 17:19:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:09:21.781 00:09:21.781 --- 10.0.0.1 ping statistics --- 00:09:21.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.781 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:09:21.781 17:19:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.781 17:19:30 -- nvmf/common.sh@410 -- # return 0 00:09:21.781 17:19:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:21.781 17:19:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.781 17:19:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:21.781 17:19:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:21.781 17:19:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.781 17:19:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:21.781 17:19:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:21.781 17:19:30 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:21.781 17:19:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:21.781 17:19:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:21.781 17:19:30 -- common/autotest_common.sh@10 -- # set +x 00:09:21.781 17:19:30 -- nvmf/common.sh@469 -- # nvmfpid=3031450 00:09:21.781 17:19:30 -- nvmf/common.sh@470 -- # waitforlisten 3031450 00:09:21.781 17:19:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:21.781 17:19:30 -- common/autotest_common.sh@819 -- # '[' -z 3031450 ']' 00:09:21.781 17:19:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.781 17:19:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:21.781 17:19:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:21.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.781 17:19:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:21.781 17:19:30 -- common/autotest_common.sh@10 -- # set +x 00:09:21.781 [2024-10-13 17:19:30.188072] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:21.781 [2024-10-13 17:19:30.188123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.781 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.781 [2024-10-13 17:19:30.255678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.781 [2024-10-13 17:19:30.284977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:21.781 [2024-10-13 17:19:30.285109] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.781 [2024-10-13 17:19:30.285119] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.781 [2024-10-13 17:19:30.285127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:21.781 [2024-10-13 17:19:30.285190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.781 [2024-10-13 17:19:30.285322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.781 [2024-10-13 17:19:30.285477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.781 [2024-10-13 17:19:30.285479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.724 17:19:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:22.724 17:19:30 -- common/autotest_common.sh@852 -- # return 0 00:09:22.724 17:19:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:22.724 17:19:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:22.724 17:19:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.724 17:19:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.724 17:19:31 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:22.724 17:19:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:22.724 17:19:31 -- common/autotest_common.sh@10 -- # set +x 00:09:22.724 [2024-10-13 17:19:31.013397] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.724 17:19:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:22.724 17:19:31 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:22.724 17:19:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:22.724 17:19:31 -- common/autotest_common.sh@10 -- # set +x 00:09:22.724 17:19:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:22.724 17:19:31 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:22.724 17:19:31 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.724 17:19:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:22.724 17:19:31 -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.724 17:19:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:22.724 17:19:31 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.724 17:19:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:22.724 17:19:31 -- common/autotest_common.sh@10 -- # set +x 00:09:22.724 17:19:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:22.724 17:19:31 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.724 17:19:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:22.724 17:19:31 -- common/autotest_common.sh@10 -- # set +x 00:09:22.724 [2024-10-13 17:19:31.068580] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.724 17:19:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:22.724 17:19:31 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:22.724 17:19:31 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:22.724 17:19:31 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:22.724 17:19:31 -- target/connect_disconnect.sh@34 -- # set +x 00:09:25.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) [repeated "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" iteration messages, timestamps 00:09:48.414 through 00:12:56.319, elided] 00:12:58.862
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.041 17:23:26 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:18.041 17:23:26 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:18.041 17:23:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:18.041 17:23:26 -- nvmf/common.sh@116 -- # sync 00:13:18.041 17:23:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:18.041 17:23:26 -- nvmf/common.sh@119 -- # set +e 00:13:18.041 17:23:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:18.041 17:23:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:18.041 rmmod nvme_tcp 00:13:18.041 rmmod nvme_fabrics 00:13:18.041 rmmod nvme_keyring 00:13:18.041 17:23:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:18.041 17:23:26 -- nvmf/common.sh@123 -- # set -e 00:13:18.041 17:23:26 -- nvmf/common.sh@124 -- # return 0 00:13:18.041 17:23:26 -- nvmf/common.sh@477 -- # '[' -n 3031450 ']' 00:13:18.041 17:23:26 -- nvmf/common.sh@478 -- # killprocess 3031450 00:13:18.042 17:23:26 -- common/autotest_common.sh@926 -- # '[' -z 3031450 ']' 00:13:18.042 17:23:26 -- common/autotest_common.sh@930 -- # kill -0 3031450 00:13:18.042 17:23:26 -- common/autotest_common.sh@931 -- # uname 00:13:18.042 17:23:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:18.042 17:23:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
3031450 00:13:18.042 17:23:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:18.042 17:23:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:18.042 17:23:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3031450' 00:13:18.042 killing process with pid 3031450 00:13:18.042 17:23:26 -- common/autotest_common.sh@945 -- # kill 3031450 00:13:18.042 17:23:26 -- common/autotest_common.sh@950 -- # wait 3031450 00:13:18.042 17:23:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:18.042 17:23:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:18.042 17:23:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:18.042 17:23:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.042 17:23:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:18.042 17:23:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.042 17:23:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.042 17:23:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.952 17:23:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:19.952 00:13:19.952 real 4m5.723s 00:13:19.952 user 15m35.170s 00:13:19.952 sys 0m26.873s 00:13:19.952 17:23:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.952 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:13:19.952 ************************************ 00:13:19.952 END TEST nvmf_connect_disconnect 00:13:19.952 ************************************ 00:13:19.952 17:23:28 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:19.952 17:23:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:19.952 17:23:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.952 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:13:19.952 ************************************ 00:13:19.952 
START TEST nvmf_multitarget 00:13:19.952 ************************************ 00:13:19.952 17:23:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:20.213 * Looking for test storage... 00:13:20.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.213 17:23:28 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.213 17:23:28 -- nvmf/common.sh@7 -- # uname -s 00:13:20.213 17:23:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.213 17:23:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.213 17:23:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.213 17:23:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.213 17:23:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.213 17:23:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.213 17:23:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.213 17:23:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.213 17:23:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.213 17:23:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.213 17:23:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:20.213 17:23:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:20.213 17:23:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.213 17:23:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.213 17:23:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.213 17:23:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.213 17:23:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.213 17:23:28 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.213 17:23:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.213 17:23:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.213 17:23:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.213 17:23:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.213 17:23:28 -- paths/export.sh@5 -- # export PATH 00:13:20.213 17:23:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.213 17:23:28 -- nvmf/common.sh@46 -- # : 0 00:13:20.213 17:23:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:20.213 17:23:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:20.213 17:23:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:20.213 17:23:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.213 17:23:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.213 17:23:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:20.213 17:23:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:20.213 17:23:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:20.213 17:23:28 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:20.213 17:23:28 -- target/multitarget.sh@15 -- # nvmftestinit 00:13:20.213 17:23:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:20.213 17:23:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.213 17:23:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:20.213 17:23:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:20.213 17:23:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:20.213 17:23:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.213 17:23:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.213 17:23:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.213 17:23:28 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:20.213 17:23:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:20.213 17:23:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:20.213 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:13:28.349 17:23:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:28.349 17:23:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:28.349 17:23:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:28.349 17:23:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:28.349 17:23:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:28.349 17:23:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:28.349 17:23:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:28.349 17:23:35 -- nvmf/common.sh@294 -- # net_devs=() 00:13:28.349 17:23:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:28.349 17:23:35 -- nvmf/common.sh@295 -- # e810=() 00:13:28.349 17:23:35 -- nvmf/common.sh@295 -- # local -ga e810 00:13:28.349 17:23:35 -- nvmf/common.sh@296 -- # x722=() 00:13:28.349 17:23:35 -- nvmf/common.sh@296 -- # local -ga x722 00:13:28.349 17:23:35 -- nvmf/common.sh@297 -- # mlx=() 00:13:28.349 17:23:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:28.349 17:23:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.349 17:23:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.349 17:23:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.349 17:23:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.349 17:23:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.349 17:23:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.349 17:23:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.349 17:23:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.349 17:23:35 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.349 17:23:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.349 17:23:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.349 17:23:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:28.349 17:23:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:28.349 17:23:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:28.349 17:23:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:28.349 17:23:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:28.349 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:28.349 17:23:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:28.349 17:23:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:28.349 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:28.349 17:23:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:28.349 17:23:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:28.349 17:23:35 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:28.349 17:23:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:28.349 17:23:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.349 17:23:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:28.349 17:23:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.349 17:23:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:28.349 Found net devices under 0000:31:00.0: cvl_0_0 00:13:28.349 17:23:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.349 17:23:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:28.349 17:23:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.349 17:23:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:28.350 17:23:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.350 17:23:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:28.350 Found net devices under 0000:31:00.1: cvl_0_1 00:13:28.350 17:23:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.350 17:23:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:28.350 17:23:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:28.350 17:23:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:28.350 17:23:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:28.350 17:23:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:28.350 17:23:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.350 17:23:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.350 17:23:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.350 17:23:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:28.350 17:23:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.350 17:23:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.350 17:23:35 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:28.350 17:23:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.350 17:23:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.350 17:23:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:28.350 17:23:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:28.350 17:23:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.350 17:23:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.350 17:23:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.350 17:23:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.350 17:23:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:28.350 17:23:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.350 17:23:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.350 17:23:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.350 17:23:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:28.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.784 ms 00:13:28.350 00:13:28.350 --- 10.0.0.2 ping statistics --- 00:13:28.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.350 rtt min/avg/max/mdev = 0.784/0.784/0.784/0.000 ms 00:13:28.350 17:23:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:13:28.350 00:13:28.350 --- 10.0.0.1 ping statistics --- 00:13:28.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.350 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:13:28.350 17:23:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.350 17:23:35 -- nvmf/common.sh@410 -- # return 0 00:13:28.350 17:23:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:28.350 17:23:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.350 17:23:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:28.350 17:23:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:28.350 17:23:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.350 17:23:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:28.350 17:23:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:28.350 17:23:35 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:28.350 17:23:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:28.350 17:23:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:28.350 17:23:35 -- common/autotest_common.sh@10 -- # set +x 00:13:28.350 17:23:35 -- nvmf/common.sh@469 -- # nvmfpid=3084087 00:13:28.350 17:23:35 -- nvmf/common.sh@470 -- # waitforlisten 3084087 00:13:28.350 17:23:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.350 17:23:35 -- common/autotest_common.sh@819 -- # '[' -z 3084087 ']' 00:13:28.350 17:23:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.350 17:23:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:28.350 17:23:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:28.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.350 17:23:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:28.350 17:23:35 -- common/autotest_common.sh@10 -- # set +x 00:13:28.350 [2024-10-13 17:23:35.995477] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:28.350 [2024-10-13 17:23:35.995532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.350 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.350 [2024-10-13 17:23:36.064216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.350 [2024-10-13 17:23:36.093648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:28.350 [2024-10-13 17:23:36.093791] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.350 [2024-10-13 17:23:36.093802] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.350 [2024-10-13 17:23:36.093812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:28.350 [2024-10-13 17:23:36.093987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.350 [2024-10-13 17:23:36.094122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.350 [2024-10-13 17:23:36.094182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.350 [2024-10-13 17:23:36.094184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.350 17:23:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:28.350 17:23:36 -- common/autotest_common.sh@852 -- # return 0 00:13:28.350 17:23:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:28.350 17:23:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:28.350 17:23:36 -- common/autotest_common.sh@10 -- # set +x 00:13:28.350 17:23:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.350 17:23:36 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:28.350 17:23:36 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:28.350 17:23:36 -- target/multitarget.sh@21 -- # jq length 00:13:28.612 17:23:36 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:28.612 17:23:36 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:28.612 "nvmf_tgt_1" 00:13:28.612 17:23:37 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:28.612 "nvmf_tgt_2" 00:13:28.872 17:23:37 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:28.872 17:23:37 -- target/multitarget.sh@28 -- # jq length 00:13:28.872 
17:23:37 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:28.872 17:23:37 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:28.872 true 00:13:28.872 17:23:37 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:29.133 true 00:13:29.133 17:23:37 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:29.133 17:23:37 -- target/multitarget.sh@35 -- # jq length 00:13:29.133 17:23:37 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:29.133 17:23:37 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:29.133 17:23:37 -- target/multitarget.sh@41 -- # nvmftestfini 00:13:29.133 17:23:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:29.133 17:23:37 -- nvmf/common.sh@116 -- # sync 00:13:29.133 17:23:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:29.133 17:23:37 -- nvmf/common.sh@119 -- # set +e 00:13:29.133 17:23:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:29.133 17:23:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:29.133 rmmod nvme_tcp 00:13:29.133 rmmod nvme_fabrics 00:13:29.133 rmmod nvme_keyring 00:13:29.133 17:23:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:29.133 17:23:37 -- nvmf/common.sh@123 -- # set -e 00:13:29.133 17:23:37 -- nvmf/common.sh@124 -- # return 0 00:13:29.133 17:23:37 -- nvmf/common.sh@477 -- # '[' -n 3084087 ']' 00:13:29.133 17:23:37 -- nvmf/common.sh@478 -- # killprocess 3084087 00:13:29.133 17:23:37 -- common/autotest_common.sh@926 -- # '[' -z 3084087 ']' 00:13:29.133 17:23:37 -- common/autotest_common.sh@930 -- # kill -0 3084087 00:13:29.133 17:23:37 -- common/autotest_common.sh@931 -- # uname 00:13:29.133 17:23:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:13:29.133 17:23:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3084087 00:13:29.393 17:23:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:29.393 17:23:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:29.393 17:23:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3084087' 00:13:29.393 killing process with pid 3084087 00:13:29.393 17:23:37 -- common/autotest_common.sh@945 -- # kill 3084087 00:13:29.393 17:23:37 -- common/autotest_common.sh@950 -- # wait 3084087 00:13:29.393 17:23:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:29.393 17:23:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:29.393 17:23:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:29.393 17:23:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.393 17:23:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:29.393 17:23:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.393 17:23:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.393 17:23:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.943 17:23:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:31.943 00:13:31.943 real 0m11.434s 00:13:31.943 user 0m9.773s 00:13:31.943 sys 0m5.829s 00:13:31.943 17:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.943 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:13:31.943 ************************************ 00:13:31.943 END TEST nvmf_multitarget 00:13:31.943 ************************************ 00:13:31.943 17:23:39 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:31.943 17:23:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:31.943 17:23:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.943 17:23:39 -- common/autotest_common.sh@10 -- # set +x 
00:13:31.943 ************************************ 00:13:31.943 START TEST nvmf_rpc 00:13:31.943 ************************************ 00:13:31.943 17:23:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:31.943 * Looking for test storage... 00:13:31.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.943 17:23:40 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.943 17:23:40 -- nvmf/common.sh@7 -- # uname -s 00:13:31.943 17:23:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.943 17:23:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.943 17:23:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.943 17:23:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.943 17:23:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.943 17:23:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.943 17:23:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.943 17:23:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.943 17:23:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.943 17:23:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.943 17:23:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:31.943 17:23:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:31.943 17:23:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.943 17:23:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.943 17:23:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.943 17:23:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.943 17:23:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:13:31.943 17:23:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.943 17:23:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.943 17:23:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.943 17:23:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.943 17:23:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.943 17:23:40 -- paths/export.sh@5 -- # export PATH 00:13:31.943 17:23:40 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.943 17:23:40 -- nvmf/common.sh@46 -- # : 0 00:13:31.943 17:23:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:31.943 17:23:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:31.943 17:23:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:31.943 17:23:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.943 17:23:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.943 17:23:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:31.943 17:23:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:31.943 17:23:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:31.943 17:23:40 -- target/rpc.sh@11 -- # loops=5 00:13:31.943 17:23:40 -- target/rpc.sh@23 -- # nvmftestinit 00:13:31.943 17:23:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:31.943 17:23:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.943 17:23:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:31.943 17:23:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:31.943 17:23:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:31.943 17:23:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.943 17:23:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.943 17:23:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.943 17:23:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:31.943 17:23:40 -- 
nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:31.943 17:23:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:31.943 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.074 17:23:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:40.074 17:23:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:40.074 17:23:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:40.074 17:23:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:40.074 17:23:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:40.074 17:23:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:40.074 17:23:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:40.074 17:23:47 -- nvmf/common.sh@294 -- # net_devs=() 00:13:40.074 17:23:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:40.074 17:23:47 -- nvmf/common.sh@295 -- # e810=() 00:13:40.074 17:23:47 -- nvmf/common.sh@295 -- # local -ga e810 00:13:40.074 17:23:47 -- nvmf/common.sh@296 -- # x722=() 00:13:40.074 17:23:47 -- nvmf/common.sh@296 -- # local -ga x722 00:13:40.074 17:23:47 -- nvmf/common.sh@297 -- # mlx=() 00:13:40.074 17:23:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:40.074 17:23:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.074 17:23:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.074 17:23:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.074 17:23:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.074 17:23:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.074 17:23:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.074 17:23:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.074 17:23:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.074 17:23:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:13:40.074 17:23:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.074 17:23:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.074 17:23:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:40.074 17:23:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:40.074 17:23:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:40.074 17:23:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:40.074 17:23:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:40.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:40.074 17:23:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:40.074 17:23:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:40.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:40.074 17:23:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:40.074 17:23:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:40.074 17:23:47 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:40.074 17:23:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.074 17:23:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:40.074 17:23:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.074 17:23:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:40.074 Found net devices under 0000:31:00.0: cvl_0_0 00:13:40.074 17:23:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.074 17:23:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:40.074 17:23:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.074 17:23:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:40.074 17:23:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.074 17:23:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:40.074 Found net devices under 0000:31:00.1: cvl_0_1 00:13:40.074 17:23:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.074 17:23:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:40.074 17:23:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:40.074 17:23:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:40.074 17:23:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.074 17:23:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.074 17:23:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.074 17:23:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:40.074 17:23:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.074 17:23:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.074 17:23:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:40.074 17:23:47 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.074 17:23:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.074 17:23:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:40.074 17:23:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:40.074 17:23:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.074 17:23:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.074 17:23:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.074 17:23:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.074 17:23:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:40.074 17:23:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.074 17:23:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.074 17:23:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.074 17:23:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:40.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:13:40.074 00:13:40.074 --- 10.0.0.2 ping statistics --- 00:13:40.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.074 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:13:40.074 17:23:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:13:40.074 00:13:40.074 --- 10.0.0.1 ping statistics --- 00:13:40.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.074 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:13:40.074 17:23:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.074 17:23:47 -- nvmf/common.sh@410 -- # return 0 00:13:40.074 17:23:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:40.074 17:23:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.074 17:23:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:40.074 17:23:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.074 17:23:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:40.074 17:23:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:40.074 17:23:47 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:40.074 17:23:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:40.074 17:23:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:40.074 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:13:40.074 17:23:47 -- nvmf/common.sh@469 -- # nvmfpid=3088577 00:13:40.074 17:23:47 -- nvmf/common.sh@470 -- # waitforlisten 3088577 00:13:40.074 17:23:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.074 17:23:47 -- common/autotest_common.sh@819 -- # '[' -z 3088577 ']' 00:13:40.074 17:23:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.074 17:23:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:40.074 17:23:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
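The `nvmf_tcp_init` sequence above moves one PCI port into a dedicated network namespace, addresses both sides, opens the NVMe/TCP port in iptables, and verifies connectivity with ping in each direction. A dry-run sketch of the same sequence (the `run` echo-wrapper and function name are mine, added so the sketch works without root; interface and namespace names match the log):

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init.
# run() only echoes each command; drop the wrapper to execute for real.
run() { echo "+ $*"; }

setup_nvmf_netns() {
    ns=cvl_0_0_ns_spdk
    run ip netns add "$ns"
    run ip link set cvl_0_0 netns "$ns"                      # target port into the namespace
    run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec "$ns" ip link set cvl_0_0 up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}

setup_nvmf_netns
```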
00:13:40.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.074 17:23:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:40.074 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:13:40.075 [2024-10-13 17:23:47.629046] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:40.075 [2024-10-13 17:23:47.629122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.075 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.075 [2024-10-13 17:23:47.704561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.075 [2024-10-13 17:23:47.742488] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:40.075 [2024-10-13 17:23:47.742640] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.075 [2024-10-13 17:23:47.742656] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.075 [2024-10-13 17:23:47.742664] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
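`waitforlisten` above blocks until `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`. A simplified poll-until-present sketch of that idea (helper name and polling interval are mine; the real helper also retries an RPC against the socket):

```shell
#!/bin/sh
# Sketch of a waitforlisten-style helper: poll until a path (e.g. the
# SPDK RPC UNIX socket) exists, or give up after $tries attempts.
wait_for_path() {
    path=$1
    tries=${2:-50}
    i=0
    while [ "$i" -lt "$tries" ]; do
        [ -e "$path" ] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}

# Real usage would resemble: wait_for_path /var/tmp/spdk.sock 100
```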
00:13:40.075 [2024-10-13 17:23:47.742842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.075 [2024-10-13 17:23:47.742965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.075 [2024-10-13 17:23:47.743111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.075 [2024-10-13 17:23:47.743112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.075 17:23:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:40.075 17:23:48 -- common/autotest_common.sh@852 -- # return 0 00:13:40.075 17:23:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:40.075 17:23:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:40.075 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.075 17:23:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.075 17:23:48 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:40.075 17:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.075 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.075 17:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.075 17:23:48 -- target/rpc.sh@26 -- # stats='{ 00:13:40.075 "tick_rate": 2400000000, 00:13:40.075 "poll_groups": [ 00:13:40.075 { 00:13:40.075 "name": "nvmf_tgt_poll_group_0", 00:13:40.075 "admin_qpairs": 0, 00:13:40.075 "io_qpairs": 0, 00:13:40.075 "current_admin_qpairs": 0, 00:13:40.075 "current_io_qpairs": 0, 00:13:40.075 "pending_bdev_io": 0, 00:13:40.075 "completed_nvme_io": 0, 00:13:40.075 "transports": [] 00:13:40.075 }, 00:13:40.075 { 00:13:40.075 "name": "nvmf_tgt_poll_group_1", 00:13:40.075 "admin_qpairs": 0, 00:13:40.075 "io_qpairs": 0, 00:13:40.075 "current_admin_qpairs": 0, 00:13:40.075 "current_io_qpairs": 0, 00:13:40.075 "pending_bdev_io": 0, 00:13:40.075 "completed_nvme_io": 0, 00:13:40.075 "transports": [] 00:13:40.075 }, 00:13:40.075 { 00:13:40.075 "name": 
"nvmf_tgt_poll_group_2", 00:13:40.075 "admin_qpairs": 0, 00:13:40.075 "io_qpairs": 0, 00:13:40.075 "current_admin_qpairs": 0, 00:13:40.075 "current_io_qpairs": 0, 00:13:40.075 "pending_bdev_io": 0, 00:13:40.075 "completed_nvme_io": 0, 00:13:40.075 "transports": [] 00:13:40.075 }, 00:13:40.075 { 00:13:40.075 "name": "nvmf_tgt_poll_group_3", 00:13:40.075 "admin_qpairs": 0, 00:13:40.075 "io_qpairs": 0, 00:13:40.075 "current_admin_qpairs": 0, 00:13:40.075 "current_io_qpairs": 0, 00:13:40.075 "pending_bdev_io": 0, 00:13:40.075 "completed_nvme_io": 0, 00:13:40.075 "transports": [] 00:13:40.075 } 00:13:40.075 ] 00:13:40.075 }' 00:13:40.075 17:23:48 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:40.075 17:23:48 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:40.075 17:23:48 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:40.075 17:23:48 -- target/rpc.sh@15 -- # wc -l 00:13:40.075 17:23:48 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:40.075 17:23:48 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:40.075 17:23:48 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:40.075 17:23:48 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.075 17:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.075 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.075 [2024-10-13 17:23:48.586765] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.075 17:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.075 17:23:48 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:40.335 17:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.335 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.335 17:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.335 17:23:48 -- target/rpc.sh@33 -- # stats='{ 00:13:40.335 "tick_rate": 2400000000, 00:13:40.335 "poll_groups": [ 00:13:40.335 { 00:13:40.335 "name": 
"nvmf_tgt_poll_group_0", 00:13:40.335 "admin_qpairs": 0, 00:13:40.335 "io_qpairs": 0, 00:13:40.335 "current_admin_qpairs": 0, 00:13:40.335 "current_io_qpairs": 0, 00:13:40.335 "pending_bdev_io": 0, 00:13:40.335 "completed_nvme_io": 0, 00:13:40.335 "transports": [ 00:13:40.335 { 00:13:40.335 "trtype": "TCP" 00:13:40.335 } 00:13:40.335 ] 00:13:40.335 }, 00:13:40.335 { 00:13:40.335 "name": "nvmf_tgt_poll_group_1", 00:13:40.335 "admin_qpairs": 0, 00:13:40.335 "io_qpairs": 0, 00:13:40.335 "current_admin_qpairs": 0, 00:13:40.335 "current_io_qpairs": 0, 00:13:40.335 "pending_bdev_io": 0, 00:13:40.335 "completed_nvme_io": 0, 00:13:40.335 "transports": [ 00:13:40.335 { 00:13:40.335 "trtype": "TCP" 00:13:40.335 } 00:13:40.335 ] 00:13:40.335 }, 00:13:40.335 { 00:13:40.335 "name": "nvmf_tgt_poll_group_2", 00:13:40.335 "admin_qpairs": 0, 00:13:40.335 "io_qpairs": 0, 00:13:40.335 "current_admin_qpairs": 0, 00:13:40.335 "current_io_qpairs": 0, 00:13:40.335 "pending_bdev_io": 0, 00:13:40.335 "completed_nvme_io": 0, 00:13:40.335 "transports": [ 00:13:40.335 { 00:13:40.335 "trtype": "TCP" 00:13:40.335 } 00:13:40.335 ] 00:13:40.335 }, 00:13:40.335 { 00:13:40.335 "name": "nvmf_tgt_poll_group_3", 00:13:40.335 "admin_qpairs": 0, 00:13:40.335 "io_qpairs": 0, 00:13:40.335 "current_admin_qpairs": 0, 00:13:40.335 "current_io_qpairs": 0, 00:13:40.335 "pending_bdev_io": 0, 00:13:40.335 "completed_nvme_io": 0, 00:13:40.335 "transports": [ 00:13:40.335 { 00:13:40.335 "trtype": "TCP" 00:13:40.335 } 00:13:40.335 ] 00:13:40.335 } 00:13:40.335 ] 00:13:40.335 }' 00:13:40.335 17:23:48 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:40.335 17:23:48 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:40.335 17:23:48 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:40.335 17:23:48 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:40.335 17:23:48 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:40.335 17:23:48 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:13:40.335 17:23:48 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:40.335 17:23:48 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:40.335 17:23:48 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:40.335 17:23:48 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:40.335 17:23:48 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:40.335 17:23:48 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:40.335 17:23:48 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:40.335 17:23:48 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:40.335 17:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.335 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.335 Malloc1 00:13:40.335 17:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.335 17:23:48 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:40.335 17:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.335 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.335 17:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.335 17:23:48 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.335 17:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.336 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.336 17:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.336 17:23:48 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:40.336 17:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.336 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.336 17:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.336 17:23:48 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
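The `jcount` and `jsum` helpers traced above pipe `jq '.poll_groups[].admin_qpairs'` (one value per line) into `wc -l` and an awk accumulator respectively. The aggregation stage can be exercised on its own with sample values (the sample input below is mine, standing in for jq's output over the four poll groups):

```shell
#!/bin/sh
# The jsum/jcount helpers in rpc.sh feed jq's per-poll-group output
# into these pipelines; here sample values replace the jq stage.
sum()   { awk '{s+=$1} END {print s}'; }
count() { wc -l; }

printf '0\n0\n0\n0\n' | sum    # total admin_qpairs across poll groups -> 0
printf '0\n0\n0\n0\n' | count  # number of poll groups -> 4
```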
00:13:40.336 17:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.336 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.336 [2024-10-13 17:23:48.774617] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.336 17:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.336 17:23:48 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:40.336 17:23:48 -- common/autotest_common.sh@640 -- # local es=0 00:13:40.336 17:23:48 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:40.336 17:23:48 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:40.336 17:23:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:40.336 17:23:48 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:40.336 17:23:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:40.336 17:23:48 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:40.336 17:23:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:40.336 17:23:48 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:40.336 17:23:48 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:40.336 17:23:48 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:40.336 [2024-10-13 17:23:48.811135] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:40.336 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:40.336 could not add new controller: failed to write to nvme-fabrics device 00:13:40.336 17:23:48 -- common/autotest_common.sh@643 -- # es=1 00:13:40.336 17:23:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:40.336 17:23:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:40.336 17:23:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:40.336 17:23:48 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:40.336 17:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.336 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.336 17:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.336 17:23:48 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.248 17:23:50 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.248 17:23:50 -- common/autotest_common.sh@1177 -- # local i=0 00:13:42.248 17:23:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.248 17:23:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:42.248 17:23:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:44.155 17:23:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:44.155 17:23:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:44.155 17:23:52 -- common/autotest_common.sh@1186 -- 
# grep -c SPDKISFASTANDAWESOME 00:13:44.155 17:23:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:44.155 17:23:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.155 17:23:52 -- common/autotest_common.sh@1187 -- # return 0 00:13:44.155 17:23:52 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.155 17:23:52 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.155 17:23:52 -- common/autotest_common.sh@1198 -- # local i=0 00:13:44.155 17:23:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:44.155 17:23:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.155 17:23:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:44.155 17:23:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.155 17:23:52 -- common/autotest_common.sh@1210 -- # return 0 00:13:44.155 17:23:52 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:44.155 17:23:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.155 17:23:52 -- common/autotest_common.sh@10 -- # set +x 00:13:44.155 17:23:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.155 17:23:52 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:44.155 17:23:52 -- common/autotest_common.sh@640 -- # local es=0 00:13:44.156 17:23:52 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:13:44.156 17:23:52 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:44.156 17:23:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:44.156 17:23:52 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:44.156 17:23:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:44.156 17:23:52 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:44.156 17:23:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:44.156 17:23:52 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:44.156 17:23:52 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:44.156 17:23:52 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:44.156 [2024-10-13 17:23:52.568740] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:44.156 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:44.156 could not add new controller: failed to write to nvme-fabrics device 00:13:44.156 17:23:52 -- common/autotest_common.sh@643 -- # es=1 00:13:44.156 17:23:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:44.156 17:23:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:44.156 17:23:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:44.156 17:23:52 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:44.156 17:23:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.156 17:23:52 -- common/autotest_common.sh@10 -- # set +x 00:13:44.156 17:23:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.156 17:23:52 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.064 17:23:54 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.064 17:23:54 -- common/autotest_common.sh@1177 -- # local i=0 00:13:46.064 17:23:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.064 17:23:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:46.064 17:23:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:47.973 17:23:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:47.974 17:23:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:47.974 17:23:56 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.974 17:23:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:47.974 17:23:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.974 17:23:56 -- common/autotest_common.sh@1187 -- # return 0 00:13:47.974 17:23:56 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.974 17:23:56 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.974 17:23:56 -- common/autotest_common.sh@1198 -- # local i=0 00:13:47.974 17:23:56 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:47.974 17:23:56 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.974 17:23:56 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:47.974 17:23:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.974 17:23:56 -- common/autotest_common.sh@1210 -- # return 0 00:13:47.974 17:23:56 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.974 17:23:56 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:13:47.974 17:23:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.974 17:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.974 17:23:56 -- target/rpc.sh@81 -- # seq 1 5 00:13:47.974 17:23:56 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:47.974 17:23:56 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.974 17:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.974 17:23:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.974 17:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.974 17:23:56 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.974 17:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.974 17:23:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.974 [2024-10-13 17:23:56.334166] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.974 17:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.974 17:23:56 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:47.974 17:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.974 17:23:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.974 17:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.974 17:23:56 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.974 17:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.974 17:23:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.974 17:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.974 17:23:56 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:13:49.357 17:23:57 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.357 17:23:57 -- common/autotest_common.sh@1177 -- # local i=0 00:13:49.357 17:23:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.357 17:23:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:49.357 17:23:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:51.895 17:23:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:51.895 17:23:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:51.895 17:23:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.895 17:23:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:51.895 17:23:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.895 17:23:59 -- common/autotest_common.sh@1187 -- # return 0 00:13:51.895 17:23:59 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.895 17:23:59 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.895 17:23:59 -- common/autotest_common.sh@1198 -- # local i=0 00:13:51.895 17:23:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:51.895 17:23:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.895 17:23:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:51.895 17:23:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.895 17:23:59 -- common/autotest_common.sh@1210 -- # return 0 00:13:51.895 17:23:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.895 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.895 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:13:51.895 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
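The connect/disconnect cycles traced above all hinge on the `waitforserial` polling helper, which retries `lsblk` until the subsystem's serial shows up as a block device. A minimal, self-contained sketch of that pattern follows; the function name, serial string, and retry limit are taken from the trace, while `list_block_devices` is a stub standing in for `lsblk -l -o NAME,SERIAL` so the sketch runs without a real NVMe device attached:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial polling loop seen in the trace above.
# Assumption: list_block_devices is a stand-in for 'lsblk -l -o NAME,SERIAL'.
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        # Real test: lsblk -l -o NAME,SERIAL | grep -q -w "$serial"
        if list_block_devices | grep -q -w "$serial"; then
            return 0
        fi
        sleep 2   # matches the 2-second back-off in the log
    done
    return 1
}

# Stub device listing (hypothetical; replaces lsblk for this sketch).
list_block_devices() {
    printf 'nvme0n1 SPDKISFASTANDAWESOME\n'
}

waitforserial SPDKISFASTANDAWESOME && echo "serial found"
```

The real helper in `autotest_common.sh` also counts matches (`grep -c`) against an expected device count; this sketch keeps only the presence check for brevity.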
00:13:51.895 17:23:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.895 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.895 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:13:51.895 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.895 17:23:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:51.895 17:23:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:51.895 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.895 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:13:51.895 17:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.895 17:24:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.895 17:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.895 17:24:00 -- common/autotest_common.sh@10 -- # set +x 00:13:51.895 [2024-10-13 17:24:00.016369] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.895 17:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.895 17:24:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:51.895 17:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.895 17:24:00 -- common/autotest_common.sh@10 -- # set +x 00:13:51.895 17:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.895 17:24:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:51.895 17:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.895 17:24:00 -- common/autotest_common.sh@10 -- # set +x 00:13:51.895 17:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.896 17:24:00 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:53.278 17:24:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:53.278 17:24:01 -- common/autotest_common.sh@1177 -- # local i=0 00:13:53.278 17:24:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.278 17:24:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:53.278 17:24:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:55.188 17:24:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:55.188 17:24:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:55.188 17:24:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.188 17:24:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:55.188 17:24:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.188 17:24:03 -- common/autotest_common.sh@1187 -- # return 0 00:13:55.188 17:24:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.188 17:24:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.188 17:24:03 -- common/autotest_common.sh@1198 -- # local i=0 00:13:55.188 17:24:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:55.188 17:24:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.449 17:24:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:55.449 17:24:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.449 17:24:03 -- common/autotest_common.sh@1210 -- # return 0 00:13:55.449 17:24:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.449 17:24:03 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:13:55.449 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.449 17:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.449 17:24:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.449 17:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.449 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.449 17:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.449 17:24:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:55.449 17:24:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.449 17:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.449 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.449 17:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.449 17:24:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.449 17:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.449 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.449 [2024-10-13 17:24:03.782136] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.449 17:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.449 17:24:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:55.449 17:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.449 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.449 17:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.449 17:24:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.449 17:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.449 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.449 17:24:03 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.449 17:24:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.832 17:24:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:56.832 17:24:05 -- common/autotest_common.sh@1177 -- # local i=0 00:13:56.832 17:24:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.832 17:24:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:56.832 17:24:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:59.374 17:24:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:59.374 17:24:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:59.374 17:24:07 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.374 17:24:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:59.374 17:24:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.374 17:24:07 -- common/autotest_common.sh@1187 -- # return 0 00:13:59.374 17:24:07 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.374 17:24:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.374 17:24:07 -- common/autotest_common.sh@1198 -- # local i=0 00:13:59.374 17:24:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:59.374 17:24:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.374 17:24:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:59.374 17:24:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.374 17:24:07 -- common/autotest_common.sh@1210 -- # return 0 00:13:59.374 17:24:07 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.374 17:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.374 17:24:07 -- common/autotest_common.sh@10 -- # set +x 00:13:59.374 17:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.374 17:24:07 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.374 17:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.374 17:24:07 -- common/autotest_common.sh@10 -- # set +x 00:13:59.374 17:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.374 17:24:07 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:59.374 17:24:07 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:59.374 17:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.374 17:24:07 -- common/autotest_common.sh@10 -- # set +x 00:13:59.374 17:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.374 17:24:07 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.374 17:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.374 17:24:07 -- common/autotest_common.sh@10 -- # set +x 00:13:59.374 [2024-10-13 17:24:07.507221] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.374 17:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.375 17:24:07 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:59.375 17:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.375 17:24:07 -- common/autotest_common.sh@10 -- # set +x 00:13:59.375 17:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.375 17:24:07 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:59.375 17:24:07 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:13:59.375 17:24:07 -- common/autotest_common.sh@10 -- # set +x 00:13:59.375 17:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.375 17:24:07 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:00.766 17:24:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:00.766 17:24:09 -- common/autotest_common.sh@1177 -- # local i=0 00:14:00.766 17:24:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.766 17:24:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:00.766 17:24:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:02.679 17:24:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:02.679 17:24:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:02.679 17:24:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.679 17:24:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:02.679 17:24:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.679 17:24:11 -- common/autotest_common.sh@1187 -- # return 0 00:14:02.679 17:24:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.679 17:24:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:02.679 17:24:11 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.679 17:24:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.679 17:24:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.679 17:24:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.679 17:24:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.940 
17:24:11 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.940 17:24:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.940 17:24:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.940 17:24:11 -- common/autotest_common.sh@10 -- # set +x 00:14:02.940 17:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.940 17:24:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.940 17:24:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.940 17:24:11 -- common/autotest_common.sh@10 -- # set +x 00:14:02.940 17:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.940 17:24:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:02.940 17:24:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:02.940 17:24:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.940 17:24:11 -- common/autotest_common.sh@10 -- # set +x 00:14:02.940 17:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.940 17:24:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.940 17:24:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.940 17:24:11 -- common/autotest_common.sh@10 -- # set +x 00:14:02.940 [2024-10-13 17:24:11.260121] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.940 17:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.940 17:24:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:02.940 17:24:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.940 17:24:11 -- common/autotest_common.sh@10 -- # set +x 00:14:02.940 17:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.940 17:24:11 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:02.940 17:24:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.940 17:24:11 -- common/autotest_common.sh@10 -- # set +x 00:14:02.940 17:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.940 17:24:11 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.324 17:24:12 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.324 17:24:12 -- common/autotest_common.sh@1177 -- # local i=0 00:14:04.324 17:24:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.324 17:24:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:04.324 17:24:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:06.868 17:24:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:06.868 17:24:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:06.868 17:24:14 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.868 17:24:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:06.868 17:24:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.868 17:24:14 -- common/autotest_common.sh@1187 -- # return 0 00:14:06.868 17:24:14 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.868 17:24:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:06.868 17:24:14 -- common/autotest_common.sh@1198 -- # local i=0 00:14:06.868 17:24:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:06.868 17:24:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.868 17:24:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:14:06.868 17:24:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.868 17:24:14 -- common/autotest_common.sh@1210 -- # return 0 00:14:06.868 17:24:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.868 17:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.868 17:24:14 -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 17:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.868 17:24:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.868 17:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.868 17:24:14 -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 17:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.868 17:24:14 -- target/rpc.sh@99 -- # seq 1 5 00:14:06.868 17:24:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.868 17:24:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.868 17:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.868 17:24:14 -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 17:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.868 17:24:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.868 17:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.868 17:24:14 -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 [2024-10-13 17:24:14.970570] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.869 17:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.869 17:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:14 -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.869 17:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:14 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.869 17:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:14 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.869 17:24:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 [2024-10-13 17:24:15.034612] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 
17:24:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.869 17:24:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 
[2024-10-13 17:24:15.090769] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.869 17:24:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 [2024-10-13 17:24:15.150962] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.869 17:24:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 
-- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 [2024-10-13 17:24:15.215169] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.869 17:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:06.869 17:24:15 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.869 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 17:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.869 17:24:15 -- target/rpc.sh@110 -- # stats='{ 00:14:06.869 "tick_rate": 2400000000, 00:14:06.869 "poll_groups": [ 00:14:06.869 { 00:14:06.869 "name": "nvmf_tgt_poll_group_0", 00:14:06.869 "admin_qpairs": 0, 00:14:06.869 "io_qpairs": 224, 00:14:06.869 "current_admin_qpairs": 0, 00:14:06.869 "current_io_qpairs": 0, 00:14:06.869 "pending_bdev_io": 0, 00:14:06.869 "completed_nvme_io": 249, 00:14:06.869 "transports": [ 00:14:06.869 { 00:14:06.869 "trtype": "TCP" 00:14:06.869 } 00:14:06.869 ] 00:14:06.869 }, 00:14:06.869 { 00:14:06.869 "name": "nvmf_tgt_poll_group_1", 00:14:06.869 "admin_qpairs": 1, 00:14:06.869 "io_qpairs": 223, 00:14:06.869 "current_admin_qpairs": 0, 00:14:06.869 "current_io_qpairs": 0, 00:14:06.869 "pending_bdev_io": 0, 00:14:06.869 "completed_nvme_io": 253, 00:14:06.869 "transports": [ 00:14:06.869 { 00:14:06.869 "trtype": "TCP" 00:14:06.869 } 00:14:06.869 ] 00:14:06.869 }, 00:14:06.869 { 00:14:06.869 "name": "nvmf_tgt_poll_group_2", 00:14:06.869 "admin_qpairs": 6, 00:14:06.869 "io_qpairs": 218, 00:14:06.869 "current_admin_qpairs": 0, 00:14:06.869 "current_io_qpairs": 0, 00:14:06.869 "pending_bdev_io": 0, 00:14:06.869 "completed_nvme_io": 289, 00:14:06.869 "transports": [ 00:14:06.869 { 00:14:06.869 "trtype": "TCP" 00:14:06.869 } 00:14:06.869 ] 00:14:06.869 }, 00:14:06.869 { 00:14:06.869 "name": "nvmf_tgt_poll_group_3", 00:14:06.869 "admin_qpairs": 0, 00:14:06.870 "io_qpairs": 224, 00:14:06.870 "current_admin_qpairs": 0, 00:14:06.870 "current_io_qpairs": 0, 00:14:06.870 "pending_bdev_io": 0, 00:14:06.870 "completed_nvme_io": 448, 00:14:06.870 "transports": [ 00:14:06.870 { 00:14:06.870 "trtype": "TCP" 00:14:06.870 } 00:14:06.870 ] 00:14:06.870 } 00:14:06.870 ] 00:14:06.870 }' 00:14:06.870 17:24:15 -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:14:06.870 17:24:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:06.870 17:24:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:06.870 17:24:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:06.870 17:24:15 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:06.870 17:24:15 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:06.870 17:24:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:06.870 17:24:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:06.870 17:24:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:06.870 17:24:15 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:06.870 17:24:15 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:06.870 17:24:15 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:06.870 17:24:15 -- target/rpc.sh@123 -- # nvmftestfini 00:14:06.870 17:24:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:06.870 17:24:15 -- nvmf/common.sh@116 -- # sync 00:14:06.870 17:24:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:06.870 17:24:15 -- nvmf/common.sh@119 -- # set +e 00:14:06.870 17:24:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:06.870 17:24:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:06.870 rmmod nvme_tcp 00:14:07.130 rmmod nvme_fabrics 00:14:07.130 rmmod nvme_keyring 00:14:07.130 17:24:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:07.130 17:24:15 -- nvmf/common.sh@123 -- # set -e 00:14:07.130 17:24:15 -- nvmf/common.sh@124 -- # return 0 00:14:07.130 17:24:15 -- nvmf/common.sh@477 -- # '[' -n 3088577 ']' 00:14:07.130 17:24:15 -- nvmf/common.sh@478 -- # killprocess 3088577 00:14:07.130 17:24:15 -- common/autotest_common.sh@926 -- # '[' -z 3088577 ']' 00:14:07.130 17:24:15 -- common/autotest_common.sh@930 -- # kill -0 3088577 00:14:07.130 17:24:15 -- common/autotest_common.sh@931 -- # uname 00:14:07.130 17:24:15 -- common/autotest_common.sh@931 -- # '[' 
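The `jsum` helper traced above feeds a jq filter's numeric output into an awk accumulator to total qpair counts across poll groups. A self-contained sketch of that pattern (grep -o stands in for `jq "$filter"` so the sketch needs no external jq; the field name and values are copied from the stats JSON above):

```shell
# Sketch of the jsum pattern: extract a numeric field, sum it with awk.
# The real helper pipes `jq "$filter"` into the same awk one-liner.
stats='"admin_qpairs": 0, "admin_qpairs": 1, "admin_qpairs": 6, "admin_qpairs": 0'
sum=$(printf '%s\n' "$stats" | grep -o '"admin_qpairs": [0-9]*' | awk '{s+=$2} END {print s}')
echo "$sum"
```

This reproduces the `(( 7 > 0 ))` admin_qpairs check seen in the trace.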
Linux = Linux ']' 00:14:07.130 17:24:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3088577 00:14:07.130 17:24:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:07.130 17:24:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:07.130 17:24:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3088577' 00:14:07.130 killing process with pid 3088577 00:14:07.130 17:24:15 -- common/autotest_common.sh@945 -- # kill 3088577 00:14:07.130 17:24:15 -- common/autotest_common.sh@950 -- # wait 3088577 00:14:07.130 17:24:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:07.130 17:24:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:07.130 17:24:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:07.130 17:24:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.130 17:24:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:07.130 17:24:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.130 17:24:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.130 17:24:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.673 17:24:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:09.673 00:14:09.673 real 0m37.779s 00:14:09.673 user 1m53.378s 00:14:09.673 sys 0m7.800s 00:14:09.673 17:24:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.673 17:24:17 -- common/autotest_common.sh@10 -- # set +x 00:14:09.673 ************************************ 00:14:09.673 END TEST nvmf_rpc 00:14:09.673 ************************************ 00:14:09.673 17:24:17 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:09.673 17:24:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:09.673 17:24:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:09.673 17:24:17 -- 
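The `killprocess` sequence above reads the pid's command name with `ps --no-headers -o comm=` and refuses to signal a bare sudo wrapper before issuing `kill` and `wait`. A minimal sketch of that guard, with the process name stubbed in rather than read from a live pid:

```shell
# Sketch of the killprocess guard: only signal a pid whose command name
# is not "sudo". process_name is a stand-in for `ps -o comm= "$pid"`.
killprocess_check() {
    local process_name=$1
    [ "$process_name" != "sudo" ]
}
killprocess_check reactor_0 && echo 'safe to kill'
```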
common/autotest_common.sh@10 -- # set +x 00:14:09.673 ************************************ 00:14:09.673 START TEST nvmf_invalid 00:14:09.673 ************************************ 00:14:09.673 17:24:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:09.673 * Looking for test storage... 00:14:09.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.673 17:24:17 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.673 17:24:17 -- nvmf/common.sh@7 -- # uname -s 00:14:09.673 17:24:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.673 17:24:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.673 17:24:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.673 17:24:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.673 17:24:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.673 17:24:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.673 17:24:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.673 17:24:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.673 17:24:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.673 17:24:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.673 17:24:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:09.673 17:24:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:09.673 17:24:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.673 17:24:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.673 17:24:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.673 17:24:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.673 17:24:17 -- 
scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.673 17:24:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.673 17:24:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.673 17:24:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.673 17:24:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.673 17:24:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.673 17:24:17 -- 
paths/export.sh@5 -- # export PATH 00:14:09.673 17:24:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.673 17:24:17 -- nvmf/common.sh@46 -- # : 0 00:14:09.673 17:24:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:09.673 17:24:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:09.673 17:24:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:09.673 17:24:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.673 17:24:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.673 17:24:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:09.673 17:24:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:09.673 17:24:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:09.673 17:24:17 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:09.673 17:24:17 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.673 17:24:17 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:09.673 17:24:17 -- target/invalid.sh@14 -- # target=foobar 00:14:09.673 17:24:17 -- target/invalid.sh@16 -- # RANDOM=0 00:14:09.673 17:24:17 -- target/invalid.sh@34 -- # nvmftestinit 00:14:09.673 17:24:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:09.673 17:24:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.673 17:24:17 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:14:09.673 17:24:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:09.673 17:24:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:09.673 17:24:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.673 17:24:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.673 17:24:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.673 17:24:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:09.673 17:24:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:09.673 17:24:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:09.673 17:24:17 -- common/autotest_common.sh@10 -- # set +x 00:14:17.914 17:24:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:17.914 17:24:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:17.914 17:24:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:17.914 17:24:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:17.914 17:24:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:17.914 17:24:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:17.914 17:24:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:17.914 17:24:24 -- nvmf/common.sh@294 -- # net_devs=() 00:14:17.914 17:24:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:17.914 17:24:24 -- nvmf/common.sh@295 -- # e810=() 00:14:17.914 17:24:24 -- nvmf/common.sh@295 -- # local -ga e810 00:14:17.914 17:24:24 -- nvmf/common.sh@296 -- # x722=() 00:14:17.914 17:24:24 -- nvmf/common.sh@296 -- # local -ga x722 00:14:17.914 17:24:24 -- nvmf/common.sh@297 -- # mlx=() 00:14:17.914 17:24:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:17.914 17:24:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.914 17:24:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:17.914 17:24:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:17.914 17:24:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:17.914 17:24:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:17.914 17:24:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:17.914 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:17.914 17:24:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:17.914 17:24:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:17.914 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:17.914 17:24:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:17.914 
17:24:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:17.914 17:24:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:17.914 17:24:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:17.914 17:24:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.914 17:24:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:17.914 17:24:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.914 17:24:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:17.915 Found net devices under 0000:31:00.0: cvl_0_0 00:14:17.915 17:24:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.915 17:24:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:17.915 17:24:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.915 17:24:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:17.915 17:24:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.915 17:24:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:17.915 Found net devices under 0000:31:00.1: cvl_0_1 00:14:17.915 17:24:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.915 17:24:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:17.915 17:24:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:17.915 17:24:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:17.915 17:24:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:17.915 17:24:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:17.915 17:24:24 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:17.915 17:24:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.915 17:24:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.915 17:24:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:17.915 17:24:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.915 17:24:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.915 17:24:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:17.915 17:24:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.915 17:24:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.915 17:24:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:17.915 17:24:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:17.915 17:24:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.915 17:24:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.915 17:24:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.915 17:24:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.915 17:24:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:17.915 17:24:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.915 17:24:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.915 17:24:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.915 17:24:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:17.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:17.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:14:17.915 00:14:17.915 --- 10.0.0.2 ping statistics --- 00:14:17.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.915 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:14:17.915 17:24:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:14:17.915 00:14:17.915 --- 10.0.0.1 ping statistics --- 00:14:17.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.915 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:14:17.915 17:24:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.915 17:24:25 -- nvmf/common.sh@410 -- # return 0 00:14:17.915 17:24:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:17.915 17:24:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.915 17:24:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:17.915 17:24:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:17.915 17:24:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.915 17:24:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:17.915 17:24:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:17.915 17:24:25 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:17.915 17:24:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:17.915 17:24:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:17.915 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:14:17.915 17:24:25 -- nvmf/common.sh@469 -- # nvmfpid=3099085 00:14:17.915 17:24:25 -- nvmf/common.sh@470 -- # waitforlisten 3099085 00:14:17.915 17:24:25 -- common/autotest_common.sh@819 -- # '[' -z 3099085 ']' 00:14:17.915 17:24:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.915 17:24:25 -- 
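The connectivity checks above end with ping's rtt summary line. A sketch of pulling the avg value out of one such line (the sample is copied from the log; the field positions assume ping's usual `min/avg/max/mdev` layout):

```shell
# Split the rtt summary on '/' and ' '; field 8 is then the avg rtt.
line='rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms'
avg=$(printf '%s\n' "$line" | awk -F'[/ ]' '{print $8}')
echo "$avg"
```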
common/autotest_common.sh@824 -- # local max_retries=100 00:14:17.915 17:24:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.915 17:24:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:17.915 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:14:17.915 17:24:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:17.915 [2024-10-13 17:24:25.360933] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:17.915 [2024-10-13 17:24:25.360992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.915 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.915 [2024-10-13 17:24:25.434896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.915 [2024-10-13 17:24:25.472329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:17.915 [2024-10-13 17:24:25.472480] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.915 [2024-10-13 17:24:25.472495] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.915 [2024-10-13 17:24:25.472503] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:17.915 [2024-10-13 17:24:25.472647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.915 [2024-10-13 17:24:25.472665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.915 [2024-10-13 17:24:25.472801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.915 [2024-10-13 17:24:25.472802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.915 17:24:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:17.915 17:24:26 -- common/autotest_common.sh@852 -- # return 0 00:14:17.915 17:24:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:17.915 17:24:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:17.915 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:14:17.915 17:24:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.915 17:24:26 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:17.915 17:24:26 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31723 00:14:17.915 [2024-10-13 17:24:26.337835] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:17.915 17:24:26 -- target/invalid.sh@40 -- # out='request: 00:14:17.915 { 00:14:17.915 "nqn": "nqn.2016-06.io.spdk:cnode31723", 00:14:17.915 "tgt_name": "foobar", 00:14:17.915 "method": "nvmf_create_subsystem", 00:14:17.915 "req_id": 1 00:14:17.915 } 00:14:17.915 Got JSON-RPC error response 00:14:17.915 response: 00:14:17.915 { 00:14:17.915 "code": -32603, 00:14:17.915 "message": "Unable to find target foobar" 00:14:17.915 }' 00:14:17.915 17:24:26 -- target/invalid.sh@41 -- # [[ request: 00:14:17.915 { 00:14:17.915 "nqn": "nqn.2016-06.io.spdk:cnode31723", 00:14:17.915 "tgt_name": "foobar", 00:14:17.915 "method": 
"nvmf_create_subsystem", 00:14:17.915 "req_id": 1 00:14:17.915 } 00:14:17.915 Got JSON-RPC error response 00:14:17.915 response: 00:14:17.915 { 00:14:17.915 "code": -32603, 00:14:17.915 "message": "Unable to find target foobar" 00:14:17.915 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:17.915 17:24:26 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:17.915 17:24:26 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6268 00:14:18.176 [2024-10-13 17:24:26.522474] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6268: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:18.176 17:24:26 -- target/invalid.sh@45 -- # out='request: 00:14:18.176 { 00:14:18.176 "nqn": "nqn.2016-06.io.spdk:cnode6268", 00:14:18.176 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:18.176 "method": "nvmf_create_subsystem", 00:14:18.176 "req_id": 1 00:14:18.176 } 00:14:18.176 Got JSON-RPC error response 00:14:18.176 response: 00:14:18.176 { 00:14:18.176 "code": -32602, 00:14:18.176 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:18.176 }' 00:14:18.176 17:24:26 -- target/invalid.sh@46 -- # [[ request: 00:14:18.176 { 00:14:18.176 "nqn": "nqn.2016-06.io.spdk:cnode6268", 00:14:18.176 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:18.176 "method": "nvmf_create_subsystem", 00:14:18.176 "req_id": 1 00:14:18.176 } 00:14:18.176 Got JSON-RPC error response 00:14:18.176 response: 00:14:18.176 { 00:14:18.176 "code": -32602, 00:14:18.176 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:18.176 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:18.176 17:24:26 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:18.176 17:24:26 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7790 00:14:18.437 [2024-10-13 
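Each negative test above captures the JSON-RPC error text from rpc.py into `$out` and glob-matches it inside `[[ ]]`. A minimal sketch of that check; here `out` mimics the response rather than driving a live nvmf target:

```shell
# Glob-match the expected error substring in the captured JSON-RPC reply.
out='{ "code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" }'
if [[ $out == *"Invalid SN"* ]]; then
    echo 'matched expected error'
fi
```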
17:24:26.703016] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7790: invalid model number 'SPDK_Controller' 00:14:18.437 17:24:26 -- target/invalid.sh@50 -- # out='request: 00:14:18.437 { 00:14:18.437 "nqn": "nqn.2016-06.io.spdk:cnode7790", 00:14:18.437 "model_number": "SPDK_Controller\u001f", 00:14:18.437 "method": "nvmf_create_subsystem", 00:14:18.437 "req_id": 1 00:14:18.437 } 00:14:18.437 Got JSON-RPC error response 00:14:18.437 response: 00:14:18.437 { 00:14:18.437 "code": -32602, 00:14:18.437 "message": "Invalid MN SPDK_Controller\u001f" 00:14:18.437 }' 00:14:18.437 17:24:26 -- target/invalid.sh@51 -- # [[ request: 00:14:18.437 { 00:14:18.437 "nqn": "nqn.2016-06.io.spdk:cnode7790", 00:14:18.437 "model_number": "SPDK_Controller\u001f", 00:14:18.437 "method": "nvmf_create_subsystem", 00:14:18.437 "req_id": 1 00:14:18.437 } 00:14:18.437 Got JSON-RPC error response 00:14:18.437 response: 00:14:18.437 { 00:14:18.437 "code": -32602, 00:14:18.437 "message": "Invalid MN SPDK_Controller\u001f" 00:14:18.437 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:18.437 17:24:26 -- target/invalid.sh@54 -- # gen_random_s 21 00:14:18.437 17:24:26 -- target/invalid.sh@19 -- # local length=21 ll 00:14:18.437 17:24:26 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:18.437 17:24:26 -- target/invalid.sh@21 -- # local chars 00:14:18.437 17:24:26 -- target/invalid.sh@22 -- # local string 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:18.437 
17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # printf %x 84 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # string+=T 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # printf %x 84 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # string+=T 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # printf %x 39 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # string+=\' 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # printf %x 125 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # string+='}' 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # printf %x 77 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # string+=M 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # printf %x 82 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # string+=R 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 
00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # printf %x 83 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:18.437 17:24:26 -- target/invalid.sh@25 -- # string+=S 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.437 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 119 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=w 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 92 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+='\' 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 87 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=W 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 122 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=z 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 49 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=1 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 
101 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=e 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 76 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=L 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 88 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=X 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 38 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+='&' 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 125 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+='}' 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 97 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=a 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 32 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e 
'\x20' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=' ' 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 122 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=z 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # printf %x 39 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:18.438 17:24:26 -- target/invalid.sh@25 -- # string+=\' 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.438 17:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.438 17:24:26 -- target/invalid.sh@28 -- # [[ T == \- ]] 00:14:18.438 17:24:26 -- target/invalid.sh@31 -- # echo 'TT'\''}MRSw\Wz1eLX&}a z'\''' 00:14:18.438 17:24:26 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'TT'\''}MRSw\Wz1eLX&}a z'\''' nqn.2016-06.io.spdk:cnode7601 00:14:18.699 [2024-10-13 17:24:27.040084] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7601: invalid serial number 'TT'}MRSw\Wz1eLX&}a z'' 00:14:18.699 17:24:27 -- target/invalid.sh@54 -- # out='request: 00:14:18.699 { 00:14:18.699 "nqn": "nqn.2016-06.io.spdk:cnode7601", 00:14:18.699 "serial_number": "TT'\''}MRSw\\Wz1eLX&}a z'\''", 00:14:18.699 "method": "nvmf_create_subsystem", 00:14:18.699 "req_id": 1 00:14:18.699 } 00:14:18.699 Got JSON-RPC error response 00:14:18.699 response: 00:14:18.699 { 00:14:18.699 "code": -32602, 00:14:18.699 "message": "Invalid SN TT'\''}MRSw\\Wz1eLX&}a z'\''" 00:14:18.699 }' 00:14:18.699 17:24:27 -- target/invalid.sh@55 -- # [[ request: 00:14:18.699 { 00:14:18.699 "nqn": "nqn.2016-06.io.spdk:cnode7601", 00:14:18.699 
"serial_number": "TT'}MRSw\\Wz1eLX&}a z'", 00:14:18.699 "method": "nvmf_create_subsystem", 00:14:18.699 "req_id": 1 00:14:18.699 } 00:14:18.699 Got JSON-RPC error response 00:14:18.699 response: 00:14:18.699 { 00:14:18.699 "code": -32602, 00:14:18.699 "message": "Invalid SN TT'}MRSw\\Wz1eLX&}a z'" 00:14:18.699 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:18.699 17:24:27 -- target/invalid.sh@58 -- # gen_random_s 41 00:14:18.699 17:24:27 -- target/invalid.sh@19 -- # local length=41 ll 00:14:18.699 17:24:27 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:18.699 17:24:27 -- target/invalid.sh@21 -- # local chars 00:14:18.699 17:24:27 -- target/invalid.sh@22 -- # local string 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 62 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+='>' 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 64 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=@ 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 
-- # printf %x 84 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=T 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 57 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=9 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 87 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=W 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 88 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=X 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 100 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=d 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 109 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=m 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 75 00:14:18.699 17:24:27 -- target/invalid.sh@25 
-- # echo -e '\x4b' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=K 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 126 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+='~' 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 101 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=e 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 112 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=p 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 105 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=i 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 75 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=K 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 115 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:18.699 17:24:27 -- 
target/invalid.sh@25 -- # string+=s 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 56 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=8 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 65 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=A 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 98 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=b 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 102 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # string+=f 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.699 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # printf %x 117 00:14:18.699 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+=u 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # printf %x 63 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+='?' 
00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # printf %x 34 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+='"' 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # printf %x 58 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+=: 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # printf %x 117 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+=u 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # printf %x 32 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+=' ' 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # printf %x 84 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+=T 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # printf %x 110 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+=n 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 
00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # printf %x 41 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+=')' 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # printf %x 116 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+=t 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # printf %x 112 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:18.961 17:24:27 -- target/invalid.sh@25 -- # string+=p 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.961 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf %x 81 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=Q 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf %x 71 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=G 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf %x 113 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=q 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf %x 66 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=B 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf %x 57 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=9 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf %x 84 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=T 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf %x 98 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=b 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf %x 58 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=: 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf %x 44 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=, 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf 
%x 67 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=C 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # printf %x 97 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:18.962 17:24:27 -- target/invalid.sh@25 -- # string+=a 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.962 17:24:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.962 17:24:27 -- target/invalid.sh@28 -- # [[ > == \- ]] 00:14:18.962 17:24:27 -- target/invalid.sh@31 -- # echo '>@T9WXdmK~epiKs8Abfu?":u Tn)tpQGqB9Tb:,Ca' 00:14:18.962 17:24:27 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '>@T9WXdmK~epiKs8Abfu?":u Tn)tpQGqB9Tb:,Ca' nqn.2016-06.io.spdk:cnode22738 00:14:19.223 [2024-10-13 17:24:27.521643] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22738: invalid model number '>@T9WXdmK~epiKs8Abfu?":u Tn)tpQGqB9Tb:,Ca' 00:14:19.223 17:24:27 -- target/invalid.sh@58 -- # out='request: 00:14:19.223 { 00:14:19.223 "nqn": "nqn.2016-06.io.spdk:cnode22738", 00:14:19.223 "model_number": ">@T9WXdmK~epiKs8Abfu?\":u Tn)tpQGqB9Tb:,Ca", 00:14:19.223 "method": "nvmf_create_subsystem", 00:14:19.223 "req_id": 1 00:14:19.223 } 00:14:19.223 Got JSON-RPC error response 00:14:19.223 response: 00:14:19.223 { 00:14:19.223 "code": -32602, 00:14:19.223 "message": "Invalid MN >@T9WXdmK~epiKs8Abfu?\":u Tn)tpQGqB9Tb:,Ca" 00:14:19.223 }' 00:14:19.223 17:24:27 -- target/invalid.sh@59 -- # [[ request: 00:14:19.223 { 00:14:19.223 "nqn": "nqn.2016-06.io.spdk:cnode22738", 00:14:19.223 "model_number": ">@T9WXdmK~epiKs8Abfu?\":u Tn)tpQGqB9Tb:,Ca", 00:14:19.223 "method": "nvmf_create_subsystem", 00:14:19.223 "req_id": 1 00:14:19.223 } 00:14:19.223 Got JSON-RPC error 
response 00:14:19.223 response: 00:14:19.223 { 00:14:19.223 "code": -32602, 00:14:19.223 "message": "Invalid MN >@T9WXdmK~epiKs8Abfu?\":u Tn)tpQGqB9Tb:,Ca" 00:14:19.223 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:19.223 17:24:27 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:19.223 [2024-10-13 17:24:27.698267] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.223 17:24:27 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:19.486 17:24:27 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:19.486 17:24:27 -- target/invalid.sh@67 -- # echo '' 00:14:19.486 17:24:27 -- target/invalid.sh@67 -- # head -n 1 00:14:19.486 17:24:27 -- target/invalid.sh@67 -- # IP= 00:14:19.486 17:24:27 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:19.745 [2024-10-13 17:24:28.047426] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:19.745 17:24:28 -- target/invalid.sh@69 -- # out='request: 00:14:19.745 { 00:14:19.746 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:19.746 "listen_address": { 00:14:19.746 "trtype": "tcp", 00:14:19.746 "traddr": "", 00:14:19.746 "trsvcid": "4421" 00:14:19.746 }, 00:14:19.746 "method": "nvmf_subsystem_remove_listener", 00:14:19.746 "req_id": 1 00:14:19.746 } 00:14:19.746 Got JSON-RPC error response 00:14:19.746 response: 00:14:19.746 { 00:14:19.746 "code": -32602, 00:14:19.746 "message": "Invalid parameters" 00:14:19.746 }' 00:14:19.746 17:24:28 -- target/invalid.sh@70 -- # [[ request: 00:14:19.746 { 00:14:19.746 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:19.746 "listen_address": { 00:14:19.746 "trtype": "tcp", 00:14:19.746 "traddr": "", 00:14:19.746 "trsvcid": "4421" 00:14:19.746 }, 
00:14:19.746 "method": "nvmf_subsystem_remove_listener", 00:14:19.746 "req_id": 1 00:14:19.746 } 00:14:19.746 Got JSON-RPC error response 00:14:19.746 response: 00:14:19.746 { 00:14:19.746 "code": -32602, 00:14:19.746 "message": "Invalid parameters" 00:14:19.746 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:19.746 17:24:28 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20357 -i 0 00:14:19.746 [2024-10-13 17:24:28.227977] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20357: invalid cntlid range [0-65519] 00:14:19.746 17:24:28 -- target/invalid.sh@73 -- # out='request: 00:14:19.746 { 00:14:19.746 "nqn": "nqn.2016-06.io.spdk:cnode20357", 00:14:19.746 "min_cntlid": 0, 00:14:19.746 "method": "nvmf_create_subsystem", 00:14:19.746 "req_id": 1 00:14:19.746 } 00:14:19.746 Got JSON-RPC error response 00:14:19.746 response: 00:14:19.746 { 00:14:19.746 "code": -32602, 00:14:19.746 "message": "Invalid cntlid range [0-65519]" 00:14:19.746 }' 00:14:19.746 17:24:28 -- target/invalid.sh@74 -- # [[ request: 00:14:19.746 { 00:14:19.746 "nqn": "nqn.2016-06.io.spdk:cnode20357", 00:14:19.746 "min_cntlid": 0, 00:14:19.746 "method": "nvmf_create_subsystem", 00:14:19.746 "req_id": 1 00:14:19.746 } 00:14:19.746 Got JSON-RPC error response 00:14:19.746 response: 00:14:19.746 { 00:14:19.746 "code": -32602, 00:14:19.746 "message": "Invalid cntlid range [0-65519]" 00:14:19.746 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.746 17:24:28 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8277 -i 65520 00:14:20.006 [2024-10-13 17:24:28.408587] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8277: invalid cntlid range [65520-65519] 00:14:20.006 17:24:28 -- target/invalid.sh@75 -- # out='request: 00:14:20.006 
{ 00:14:20.006 "nqn": "nqn.2016-06.io.spdk:cnode8277", 00:14:20.006 "min_cntlid": 65520, 00:14:20.006 "method": "nvmf_create_subsystem", 00:14:20.006 "req_id": 1 00:14:20.006 } 00:14:20.006 Got JSON-RPC error response 00:14:20.006 response: 00:14:20.006 { 00:14:20.006 "code": -32602, 00:14:20.006 "message": "Invalid cntlid range [65520-65519]" 00:14:20.006 }' 00:14:20.006 17:24:28 -- target/invalid.sh@76 -- # [[ request: 00:14:20.006 { 00:14:20.006 "nqn": "nqn.2016-06.io.spdk:cnode8277", 00:14:20.006 "min_cntlid": 65520, 00:14:20.006 "method": "nvmf_create_subsystem", 00:14:20.006 "req_id": 1 00:14:20.006 } 00:14:20.006 Got JSON-RPC error response 00:14:20.006 response: 00:14:20.006 { 00:14:20.006 "code": -32602, 00:14:20.006 "message": "Invalid cntlid range [65520-65519]" 00:14:20.006 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:20.006 17:24:28 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15381 -I 0 00:14:20.266 [2024-10-13 17:24:28.585227] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15381: invalid cntlid range [1-0] 00:14:20.266 17:24:28 -- target/invalid.sh@77 -- # out='request: 00:14:20.266 { 00:14:20.266 "nqn": "nqn.2016-06.io.spdk:cnode15381", 00:14:20.266 "max_cntlid": 0, 00:14:20.266 "method": "nvmf_create_subsystem", 00:14:20.266 "req_id": 1 00:14:20.266 } 00:14:20.266 Got JSON-RPC error response 00:14:20.266 response: 00:14:20.266 { 00:14:20.266 "code": -32602, 00:14:20.266 "message": "Invalid cntlid range [1-0]" 00:14:20.266 }' 00:14:20.266 17:24:28 -- target/invalid.sh@78 -- # [[ request: 00:14:20.267 { 00:14:20.267 "nqn": "nqn.2016-06.io.spdk:cnode15381", 00:14:20.267 "max_cntlid": 0, 00:14:20.267 "method": "nvmf_create_subsystem", 00:14:20.267 "req_id": 1 00:14:20.267 } 00:14:20.267 Got JSON-RPC error response 00:14:20.267 response: 00:14:20.267 { 00:14:20.267 "code": -32602, 00:14:20.267 "message": 
"Invalid cntlid range [1-0]" 00:14:20.267 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:20.267 17:24:28 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3182 -I 65520 00:14:20.267 [2024-10-13 17:24:28.757795] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3182: invalid cntlid range [1-65520] 00:14:20.267 17:24:28 -- target/invalid.sh@79 -- # out='request: 00:14:20.267 { 00:14:20.267 "nqn": "nqn.2016-06.io.spdk:cnode3182", 00:14:20.267 "max_cntlid": 65520, 00:14:20.267 "method": "nvmf_create_subsystem", 00:14:20.267 "req_id": 1 00:14:20.267 } 00:14:20.267 Got JSON-RPC error response 00:14:20.267 response: 00:14:20.267 { 00:14:20.267 "code": -32602, 00:14:20.267 "message": "Invalid cntlid range [1-65520]" 00:14:20.267 }' 00:14:20.267 17:24:28 -- target/invalid.sh@80 -- # [[ request: 00:14:20.267 { 00:14:20.267 "nqn": "nqn.2016-06.io.spdk:cnode3182", 00:14:20.267 "max_cntlid": 65520, 00:14:20.267 "method": "nvmf_create_subsystem", 00:14:20.267 "req_id": 1 00:14:20.267 } 00:14:20.267 Got JSON-RPC error response 00:14:20.267 response: 00:14:20.267 { 00:14:20.267 "code": -32602, 00:14:20.267 "message": "Invalid cntlid range [1-65520]" 00:14:20.267 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:20.527 17:24:28 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9727 -i 6 -I 5 00:14:20.527 [2024-10-13 17:24:28.934407] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9727: invalid cntlid range [6-5] 00:14:20.527 17:24:28 -- target/invalid.sh@83 -- # out='request: 00:14:20.527 { 00:14:20.527 "nqn": "nqn.2016-06.io.spdk:cnode9727", 00:14:20.527 "min_cntlid": 6, 00:14:20.527 "max_cntlid": 5, 00:14:20.527 "method": "nvmf_create_subsystem", 00:14:20.527 "req_id": 1 00:14:20.527 } 00:14:20.527 Got 
JSON-RPC error response 00:14:20.527 response: 00:14:20.527 { 00:14:20.527 "code": -32602, 00:14:20.527 "message": "Invalid cntlid range [6-5]" 00:14:20.527 }' 00:14:20.527 17:24:28 -- target/invalid.sh@84 -- # [[ request: 00:14:20.527 { 00:14:20.527 "nqn": "nqn.2016-06.io.spdk:cnode9727", 00:14:20.527 "min_cntlid": 6, 00:14:20.527 "max_cntlid": 5, 00:14:20.527 "method": "nvmf_create_subsystem", 00:14:20.527 "req_id": 1 00:14:20.527 } 00:14:20.527 Got JSON-RPC error response 00:14:20.527 response: 00:14:20.527 { 00:14:20.527 "code": -32602, 00:14:20.527 "message": "Invalid cntlid range [6-5]" 00:14:20.527 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:20.527 17:24:28 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:20.787 17:24:29 -- target/invalid.sh@87 -- # out='request: 00:14:20.787 { 00:14:20.787 "name": "foobar", 00:14:20.787 "method": "nvmf_delete_target", 00:14:20.787 "req_id": 1 00:14:20.787 } 00:14:20.787 Got JSON-RPC error response 00:14:20.787 response: 00:14:20.787 { 00:14:20.787 "code": -32602, 00:14:20.787 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:20.787 }' 00:14:20.787 17:24:29 -- target/invalid.sh@88 -- # [[ request: 00:14:20.787 { 00:14:20.787 "name": "foobar", 00:14:20.787 "method": "nvmf_delete_target", 00:14:20.787 "req_id": 1 00:14:20.787 } 00:14:20.787 Got JSON-RPC error response 00:14:20.787 response: 00:14:20.787 { 00:14:20.787 "code": -32602, 00:14:20.787 "message": "The specified target doesn't exist, cannot delete it." 
00:14:20.787 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:20.787 17:24:29 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:20.787 17:24:29 -- target/invalid.sh@91 -- # nvmftestfini 00:14:20.787 17:24:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:20.787 17:24:29 -- nvmf/common.sh@116 -- # sync 00:14:20.787 17:24:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:20.787 17:24:29 -- nvmf/common.sh@119 -- # set +e 00:14:20.787 17:24:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:20.787 17:24:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:20.787 rmmod nvme_tcp 00:14:20.787 rmmod nvme_fabrics 00:14:20.787 rmmod nvme_keyring 00:14:20.787 17:24:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:20.787 17:24:29 -- nvmf/common.sh@123 -- # set -e 00:14:20.787 17:24:29 -- nvmf/common.sh@124 -- # return 0 00:14:20.787 17:24:29 -- nvmf/common.sh@477 -- # '[' -n 3099085 ']' 00:14:20.787 17:24:29 -- nvmf/common.sh@478 -- # killprocess 3099085 00:14:20.787 17:24:29 -- common/autotest_common.sh@926 -- # '[' -z 3099085 ']' 00:14:20.787 17:24:29 -- common/autotest_common.sh@930 -- # kill -0 3099085 00:14:20.787 17:24:29 -- common/autotest_common.sh@931 -- # uname 00:14:20.787 17:24:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:20.787 17:24:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3099085 00:14:20.788 17:24:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:20.788 17:24:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:20.788 17:24:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3099085' 00:14:20.788 killing process with pid 3099085 00:14:20.788 17:24:29 -- common/autotest_common.sh@945 -- # kill 3099085 00:14:20.788 17:24:29 -- common/autotest_common.sh@950 -- # wait 3099085 00:14:21.053 17:24:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
00:14:21.053 17:24:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:21.053 17:24:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:21.053 17:24:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.053 17:24:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:21.053 17:24:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.053 17:24:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.054 17:24:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.967 17:24:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:22.967 00:14:22.967 real 0m13.646s 00:14:22.967 user 0m19.724s 00:14:22.967 sys 0m6.412s 00:14:22.967 17:24:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.967 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:14:22.967 ************************************ 00:14:22.967 END TEST nvmf_invalid 00:14:22.967 ************************************ 00:14:22.967 17:24:31 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:22.967 17:24:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:22.967 17:24:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:22.967 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:14:22.967 ************************************ 00:14:22.967 START TEST nvmf_abort 00:14:22.967 ************************************ 00:14:22.967 17:24:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:23.228 * Looking for test storage... 
00:14:23.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.228 17:24:31 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.228 17:24:31 -- nvmf/common.sh@7 -- # uname -s 00:14:23.228 17:24:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.228 17:24:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.228 17:24:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.228 17:24:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.228 17:24:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.228 17:24:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.228 17:24:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.228 17:24:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.228 17:24:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.228 17:24:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.228 17:24:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:23.228 17:24:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:23.228 17:24:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.228 17:24:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.228 17:24:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.228 17:24:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.228 17:24:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.228 17:24:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.228 17:24:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.228 17:24:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.228 17:24:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.228 17:24:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.228 17:24:31 -- paths/export.sh@5 -- # export PATH 00:14:23.228 17:24:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.228 17:24:31 -- nvmf/common.sh@46 -- # : 0 00:14:23.228 17:24:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:23.228 17:24:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:23.228 17:24:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:23.228 17:24:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.228 17:24:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.228 17:24:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:23.228 17:24:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:23.228 17:24:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:23.228 17:24:31 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.228 17:24:31 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:23.228 17:24:31 -- target/abort.sh@14 -- # nvmftestinit 00:14:23.228 17:24:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:23.228 17:24:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.228 17:24:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:23.228 17:24:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:23.228 17:24:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:23.228 17:24:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.228 17:24:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.228 17:24:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.228 17:24:31 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:23.228 17:24:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:23.228 17:24:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:23.228 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:14:31.366 17:24:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:31.366 17:24:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:31.366 17:24:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:31.366 17:24:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:31.366 17:24:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:31.366 17:24:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:31.366 17:24:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:31.366 17:24:38 -- nvmf/common.sh@294 -- # net_devs=() 00:14:31.366 17:24:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:31.366 17:24:38 -- nvmf/common.sh@295 -- # e810=() 00:14:31.366 17:24:38 -- nvmf/common.sh@295 -- # local -ga e810 00:14:31.366 17:24:38 -- nvmf/common.sh@296 -- # x722=() 00:14:31.366 17:24:38 -- nvmf/common.sh@296 -- # local -ga x722 00:14:31.366 17:24:38 -- nvmf/common.sh@297 -- # mlx=() 00:14:31.366 17:24:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:31.366 17:24:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.366 17:24:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.366 17:24:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.366 17:24:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.366 17:24:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.366 17:24:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.366 17:24:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.366 17:24:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.366 17:24:38 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.366 17:24:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.367 17:24:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.367 17:24:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:31.367 17:24:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:31.367 17:24:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:31.367 17:24:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:31.367 17:24:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:31.367 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:31.367 17:24:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:31.367 17:24:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:31.367 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:31.367 17:24:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:31.367 17:24:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:31.367 17:24:38 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:31.367 17:24:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.367 17:24:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:31.367 17:24:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.367 17:24:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:31.367 Found net devices under 0000:31:00.0: cvl_0_0 00:14:31.367 17:24:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.367 17:24:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:31.367 17:24:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.367 17:24:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:31.367 17:24:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.367 17:24:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:31.367 Found net devices under 0000:31:00.1: cvl_0_1 00:14:31.367 17:24:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.367 17:24:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:31.367 17:24:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:31.367 17:24:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:31.367 17:24:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.367 17:24:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.367 17:24:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.367 17:24:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:31.367 17:24:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.367 17:24:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.367 17:24:38 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:31.367 17:24:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.367 17:24:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.367 17:24:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:31.367 17:24:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:31.367 17:24:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.367 17:24:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.367 17:24:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.367 17:24:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.367 17:24:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:31.367 17:24:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.367 17:24:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.367 17:24:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.367 17:24:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:31.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:14:31.367 00:14:31.367 --- 10.0.0.2 ping statistics --- 00:14:31.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.367 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:14:31.367 17:24:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:14:31.367 00:14:31.367 --- 10.0.0.1 ping statistics --- 00:14:31.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.367 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:14:31.367 17:24:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.367 17:24:38 -- nvmf/common.sh@410 -- # return 0 00:14:31.367 17:24:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:31.367 17:24:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.367 17:24:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:31.367 17:24:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.367 17:24:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:31.367 17:24:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:31.367 17:24:38 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:31.367 17:24:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:31.367 17:24:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:31.367 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:14:31.367 17:24:38 -- nvmf/common.sh@469 -- # nvmfpid=3104339 00:14:31.367 17:24:38 -- nvmf/common.sh@470 -- # waitforlisten 3104339 00:14:31.367 17:24:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:31.367 17:24:38 -- common/autotest_common.sh@819 -- # '[' -z 3104339 ']' 00:14:31.367 17:24:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.367 17:24:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:31.367 17:24:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:31.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.367 17:24:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:31.367 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:14:31.367 [2024-10-13 17:24:38.899926] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:31.367 [2024-10-13 17:24:38.899990] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.367 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.367 [2024-10-13 17:24:38.990548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.367 [2024-10-13 17:24:39.036463] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:31.367 [2024-10-13 17:24:39.036617] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.367 [2024-10-13 17:24:39.036628] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.367 [2024-10-13 17:24:39.036638] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:31.367 [2024-10-13 17:24:39.036808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.367 [2024-10-13 17:24:39.036970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.367 [2024-10-13 17:24:39.036971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.367 17:24:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:31.367 17:24:39 -- common/autotest_common.sh@852 -- # return 0 00:14:31.367 17:24:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:31.367 17:24:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:31.367 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.367 17:24:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.367 17:24:39 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:31.367 17:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.367 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.367 [2024-10-13 17:24:39.740322] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.367 17:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.367 17:24:39 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:31.367 17:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.367 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.367 Malloc0 00:14:31.367 17:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.367 17:24:39 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:31.367 17:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.367 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.367 Delay0 00:14:31.367 17:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.367 17:24:39 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:31.367 17:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.367 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.367 17:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.367 17:24:39 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:31.367 17:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.367 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.367 17:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.367 17:24:39 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:31.367 17:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.367 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.367 [2024-10-13 17:24:39.818439] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.367 17:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.367 17:24:39 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.367 17:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.367 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.367 17:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.367 17:24:39 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:31.367 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.628 [2024-10-13 17:24:39.898008] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:33.542 Initializing NVMe Controllers 00:14:33.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:14:33.542 controller IO queue size 128 less than required 00:14:33.542 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:33.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:33.542 Initialization complete. Launching workers. 00:14:33.542 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34229 00:14:33.542 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34294, failed to submit 62 00:14:33.542 success 34229, unsuccess 65, failed 0 00:14:33.542 17:24:41 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:33.542 17:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.542 17:24:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.542 17:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.542 17:24:41 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:33.542 17:24:41 -- target/abort.sh@38 -- # nvmftestfini 00:14:33.542 17:24:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:33.542 17:24:41 -- nvmf/common.sh@116 -- # sync 00:14:33.542 17:24:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:33.542 17:24:41 -- nvmf/common.sh@119 -- # set +e 00:14:33.542 17:24:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:33.542 17:24:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:33.542 rmmod nvme_tcp 00:14:33.542 rmmod nvme_fabrics 00:14:33.542 rmmod nvme_keyring 00:14:33.542 17:24:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:33.542 17:24:42 -- nvmf/common.sh@123 -- # set -e 00:14:33.542 17:24:42 -- nvmf/common.sh@124 -- # return 0 00:14:33.542 17:24:42 -- nvmf/common.sh@477 -- # '[' -n 3104339 ']' 00:14:33.542 17:24:42 -- nvmf/common.sh@478 -- # killprocess 3104339 00:14:33.542 17:24:42 -- common/autotest_common.sh@926 -- # '[' -z 3104339 ']' 00:14:33.542 17:24:42 
-- common/autotest_common.sh@930 -- # kill -0 3104339 00:14:33.542 17:24:42 -- common/autotest_common.sh@931 -- # uname 00:14:33.542 17:24:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:33.542 17:24:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3104339 00:14:33.803 17:24:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:33.803 17:24:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:33.803 17:24:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3104339' 00:14:33.803 killing process with pid 3104339 00:14:33.803 17:24:42 -- common/autotest_common.sh@945 -- # kill 3104339 00:14:33.803 17:24:42 -- common/autotest_common.sh@950 -- # wait 3104339 00:14:33.803 17:24:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:33.803 17:24:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:33.803 17:24:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:33.803 17:24:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.803 17:24:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:33.803 17:24:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.803 17:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.803 17:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.348 17:24:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:36.348 00:14:36.348 real 0m12.847s 00:14:36.348 user 0m13.222s 00:14:36.348 sys 0m6.340s 00:14:36.348 17:24:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.348 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:14:36.348 ************************************ 00:14:36.348 END TEST nvmf_abort 00:14:36.348 ************************************ 00:14:36.348 17:24:44 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
00:14:36.348 17:24:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:36.348 17:24:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:36.348 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:14:36.348 ************************************ 00:14:36.348 START TEST nvmf_ns_hotplug_stress 00:14:36.348 ************************************ 00:14:36.348 17:24:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:36.348 * Looking for test storage... 00:14:36.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.348 17:24:44 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.348 17:24:44 -- nvmf/common.sh@7 -- # uname -s 00:14:36.348 17:24:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.348 17:24:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.348 17:24:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.348 17:24:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.348 17:24:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.348 17:24:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.348 17:24:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.348 17:24:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.348 17:24:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.348 17:24:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.348 17:24:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:36.348 17:24:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:36.348 17:24:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.348 17:24:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:14:36.348 17:24:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.348 17:24:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.348 17:24:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.348 17:24:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.348 17:24:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.348 17:24:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.348 17:24:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.349 17:24:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.349 17:24:44 -- paths/export.sh@5 -- # export PATH 00:14:36.349 17:24:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.349 17:24:44 -- nvmf/common.sh@46 -- # : 0 00:14:36.349 17:24:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:36.349 17:24:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:36.349 17:24:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:36.349 17:24:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.349 17:24:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.349 17:24:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:36.349 17:24:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:36.349 17:24:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:36.349 17:24:44 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.349 17:24:44 -- target/ns_hotplug_stress.sh@22 -- # 
nvmftestinit 00:14:36.349 17:24:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:36.349 17:24:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.349 17:24:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:36.349 17:24:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:36.349 17:24:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:36.349 17:24:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.349 17:24:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.349 17:24:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.349 17:24:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:36.349 17:24:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:36.349 17:24:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:36.349 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:14:44.488 17:24:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:44.488 17:24:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:44.488 17:24:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:44.488 17:24:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:44.488 17:24:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:44.488 17:24:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:44.488 17:24:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:44.488 17:24:51 -- nvmf/common.sh@294 -- # net_devs=() 00:14:44.488 17:24:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:44.488 17:24:51 -- nvmf/common.sh@295 -- # e810=() 00:14:44.488 17:24:51 -- nvmf/common.sh@295 -- # local -ga e810 00:14:44.488 17:24:51 -- nvmf/common.sh@296 -- # x722=() 00:14:44.488 17:24:51 -- nvmf/common.sh@296 -- # local -ga x722 00:14:44.488 17:24:51 -- nvmf/common.sh@297 -- # mlx=() 00:14:44.488 17:24:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:44.488 17:24:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.488 17:24:51 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.488 17:24:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.488 17:24:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.488 17:24:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.488 17:24:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.488 17:24:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.488 17:24:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.488 17:24:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.488 17:24:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.488 17:24:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.488 17:24:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:44.488 17:24:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:44.488 17:24:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:44.488 17:24:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:44.488 17:24:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:44.488 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:44.488 17:24:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:44.488 17:24:51 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:44.488 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:44.488 17:24:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:44.488 17:24:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:44.488 17:24:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.488 17:24:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:44.488 17:24:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.488 17:24:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:44.488 Found net devices under 0000:31:00.0: cvl_0_0 00:14:44.488 17:24:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.488 17:24:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:44.488 17:24:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.488 17:24:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:44.488 17:24:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.488 17:24:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:44.488 Found net devices under 0000:31:00.1: cvl_0_1 00:14:44.488 17:24:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.488 17:24:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:44.488 17:24:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:44.488 17:24:51 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:14:44.488 17:24:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:44.488 17:24:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.488 17:24:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.488 17:24:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.488 17:24:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:44.488 17:24:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.488 17:24:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.488 17:24:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:44.488 17:24:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.488 17:24:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.488 17:24:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:44.488 17:24:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:44.488 17:24:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.488 17:24:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.488 17:24:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.488 17:24:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.488 17:24:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:44.488 17:24:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.488 17:24:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.488 17:24:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.488 17:24:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:44.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:44.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:14:44.488 00:14:44.488 --- 10.0.0.2 ping statistics --- 00:14:44.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.488 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:14:44.488 17:24:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:14:44.488 00:14:44.488 --- 10.0.0.1 ping statistics --- 00:14:44.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.488 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:14:44.488 17:24:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.488 17:24:51 -- nvmf/common.sh@410 -- # return 0 00:14:44.488 17:24:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:44.488 17:24:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.488 17:24:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:44.488 17:24:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.488 17:24:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:44.488 17:24:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:44.488 17:24:51 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:44.488 17:24:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:44.488 17:24:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:44.488 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:14:44.488 17:24:51 -- nvmf/common.sh@469 -- # nvmfpid=3109122 00:14:44.488 17:24:51 -- nvmf/common.sh@470 -- # waitforlisten 3109122 00:14:44.488 17:24:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:44.488 17:24:51 -- 
common/autotest_common.sh@819 -- # '[' -z 3109122 ']' 00:14:44.488 17:24:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.488 17:24:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:44.488 17:24:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.488 17:24:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:44.489 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:14:44.489 [2024-10-13 17:24:51.887617] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:44.489 [2024-10-13 17:24:51.887676] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.489 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.489 [2024-10-13 17:24:51.955993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:44.489 [2024-10-13 17:24:51.997575] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:44.489 [2024-10-13 17:24:51.997703] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.489 [2024-10-13 17:24:51.997713] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.489 [2024-10-13 17:24:51.997719] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:44.489 [2024-10-13 17:24:51.997858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.489 [2024-10-13 17:24:51.998021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.489 [2024-10-13 17:24:51.998022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.489 17:24:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:44.489 17:24:52 -- common/autotest_common.sh@852 -- # return 0 00:14:44.489 17:24:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:44.489 17:24:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:44.489 17:24:52 -- common/autotest_common.sh@10 -- # set +x 00:14:44.489 17:24:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.489 17:24:52 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:44.489 17:24:52 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:44.489 [2024-10-13 17:24:52.941597] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.489 17:24:52 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:44.748 17:24:53 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.007 [2024-10-13 17:24:53.286799] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.007 17:24:53 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.007 17:24:53 -- target/ns_hotplug_stress.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:45.267 Malloc0 00:14:45.268 17:24:53 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:45.529 Delay0 00:14:45.529 17:24:53 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.529 17:24:54 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:45.790 NULL1 00:14:45.790 17:24:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:46.050 17:24:54 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3109732 00:14:46.050 17:24:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:46.050 17:24:54 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:46.050 17:24:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.050 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.050 17:24:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.310 17:24:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:46.310 17:24:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:46.570 true 00:14:46.570 17:24:54 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:46.570 17:24:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.570 17:24:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.830 17:24:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:46.830 17:24:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:47.091 true 00:14:47.091 17:24:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:47.091 17:24:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.091 17:24:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.352 17:24:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:47.352 17:24:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:47.612 true 00:14:47.612 17:24:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:47.612 17:24:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.612 17:24:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.873 17:24:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:47.873 17:24:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1004 00:14:48.133 true 00:14:48.133 17:24:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:48.133 17:24:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.133 17:24:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.393 17:24:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:48.393 17:24:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:48.393 true 00:14:48.654 17:24:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:48.654 17:24:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.654 17:24:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.914 17:24:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:48.914 17:24:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:48.914 true 00:14:48.914 17:24:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:48.914 17:24:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.176 17:24:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.436 17:24:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:49.436 17:24:57 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:49.436 true 00:14:49.436 17:24:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:49.436 17:24:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.377 Read completed with error (sct=0, sc=11) 00:14:50.377 17:24:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:50.638 17:24:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:50.638 17:24:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:50.638 true 00:14:50.638 17:24:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:50.638 17:24:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.899 17:24:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.160 17:24:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:51.160 17:24:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:51.160 true 00:14:51.160 17:24:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:51.160 17:24:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.420 17:24:59 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.681 17:24:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:51.681 17:24:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:51.681 true 00:14:51.681 17:25:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:51.681 17:25:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.942 17:25:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.202 17:25:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:52.202 17:25:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:52.202 true 00:14:52.202 17:25:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:52.202 17:25:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.461 17:25:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.721 17:25:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:52.721 17:25:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:52.721 true 00:14:52.721 17:25:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:52.721 17:25:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:14:52.981 17:25:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.242 17:25:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:53.242 17:25:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:53.242 true 00:14:53.242 17:25:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:53.242 17:25:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:53.505 17:25:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:53.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:53.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:53.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:53.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:53.505 [2024-10-13 17:25:02.010927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.505 [2024-10-13 17:25:02.010985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.505 [2024-10-13 17:25:02.011018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.505 [2024-10-13 17:25:02.011047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.505 [2024-10-13 17:25:02.011080] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.014720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015253] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.015984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 
17:25:02.016158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.506 [2024-10-13 17:25:02.016695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.016726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.016753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.016781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.016810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.016841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.016869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.016956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.016984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 
[2024-10-13 17:25:02.017114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017537] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.017999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018455] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.018975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 
17:25:02.019456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.507 [2024-10-13 17:25:02.019771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.019803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.019826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.019849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.019872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.019895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.019917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.019942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.019965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.019987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 
[2024-10-13 17:25:02.020151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020473] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.508 [2024-10-13 17:25:02.020496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-10-13 17:25:02.030564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.030978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031061] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:53.798 [2024-10-13 17:25:02.031757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.031992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 
17:25:02.032359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.032980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 
[2024-10-13 17:25:02.033438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.798 [2024-10-13 17:25:02.033520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033820] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.033977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034683] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.034775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 
17:25:02.035815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.035996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.799 [2024-10-13 17:25:02.036398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 
[2024-10-13 17:25:02.036620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.036885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.037290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.037326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.037356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.037385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.037415] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.800 [2024-10-13 17:25:02.037449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd SGL-length errors repeated for timestamps 17:25:02.037477 through 17:25:02.046533; repeats omitted]
00:14:53.801 17:25:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:14:53.801 17:25:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:14:53.803 [2024-10-13 17:25:02.046533] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.046989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047596] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.047995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.803 [2024-10-13 17:25:02.048416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 
17:25:02.048438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.048970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 
[2024-10-13 17:25:02.049262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049873] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.049986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050794] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.050998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.804 [2024-10-13 17:25:02.051708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.051741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.051777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.051804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.051830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 
17:25:02.051853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.051881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.051914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.051945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.051969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.051992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 
[2024-10-13 17:25:02.052713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.052990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.053021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.053049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.053084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.053114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.053143] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.805 [2024-10-13 17:25:02.053173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-10-13 17:25:02.063011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063537] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.808 [2024-10-13 17:25:02.063765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.063796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.063833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.063859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.063890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.063918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.063950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.809 [2024-10-13 17:25:02.063983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064603] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.064995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 Message suppressed 999 times: [2024-10-13 17:25:02.065198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 Read completed with error (sct=0, sc=15) 00:14:53.809 [2024-10-13 17:25:02.065235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 
[2024-10-13 17:25:02.065946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.065974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066339] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.066980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.809 [2024-10-13 17:25:02.067012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.067040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.067073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.067105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.067136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.067164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.809 [2024-10-13 17:25:02.067191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067416] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.067997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 
17:25:02.068227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.068986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 
[2024-10-13 17:25:02.069290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069738] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.810 [2024-10-13 17:25:02.069769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 
[2024-10-13 17:25:02.079926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.079953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.079984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080330] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.813 [2024-10-13 17:25:02.080430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.080981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081111] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.081970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 
17:25:02.082000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 
[2024-10-13 17:25:02.082885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.082959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.083014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.083046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.083083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.083116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.083145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.814 [2024-10-13 17:25:02.083177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083381] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.083987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084544] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.084992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 
17:25:02.085445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.815 [2024-10-13 17:25:02.085834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.095863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.095898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.095929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.095963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.095992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 
[2024-10-13 17:25:02.096645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.096994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097040] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097875] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.097983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.819 [2024-10-13 17:25:02.098670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.098700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.098729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.098759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 
17:25:02.098789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.098822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.098853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.098884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.098912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.098946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.098975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:53.820 [2024-10-13 17:25:02.099111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:53.820 [2024-10-13 17:25:02.099230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099614] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.099979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100676] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.820 [2024-10-13 17:25:02.100832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.100862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.100900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.100931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.100966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.100998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 
17:25:02.101580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.101839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.102280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.102313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.102344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.102376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.102406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 [2024-10-13 17:25:02.102436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.821 
[... identical "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages repeated at timestamps 17:25:02.102466 through 17:25:02.111855 ...]
[2024-10-13 17:25:02.111878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.111908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.111940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.111969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 
[2024-10-13 17:25:02.112562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.112970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.113000] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.113030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.113067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.824 [2024-10-13 17:25:02.113100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113874] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.113984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.114977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 
17:25:02.115098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.825 [2024-10-13 17:25:02.115365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 
[2024-10-13 17:25:02.115934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.115991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116327] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.116994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117454] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.117967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.118000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.118030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.118076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.118110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.826 [2024-10-13 17:25:02.118159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.827 [2024-10-13 17:25:02.118192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.827 [2024-10-13 17:25:02.118249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.827 [2024-10-13 17:25:02.118284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.827 [2024-10-13 
17:25:02.118321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* record repeated for every timestamp from 2024-10-13 17:25:02.118351 through 17:25:02.128616 ...]
00:14:53.830 [2024-10-13
17:25:02.128649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.128975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.129003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.129034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.129070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.129094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.129118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.830 [2024-10-13 17:25:02.129141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.129164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.129187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.129211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.129235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.129260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.129283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.129307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.129329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.129352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.129376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 
[2024-10-13 17:25:02.130269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130770] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.130984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131637] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.131992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:53.831 [2024-10-13 17:25:02.132398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.831 [2024-10-13 17:25:02.132702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.132740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.132770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.132800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.132831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.132862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.132892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.132923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.132953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.132982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 
[2024-10-13 17:25:02.133077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133497] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.133996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134571] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.134997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.135022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.135046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.135073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.135097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.135120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.135144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.135168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.135191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.135217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 17:25:02.135241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.832 [2024-10-13 
17:25:02.135265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13
17:25:02.145392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.145973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 
[2024-10-13 17:25:02.146349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146905] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.146999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.836 [2024-10-13 17:25:02.147538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147681] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.147993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 
17:25:02.148936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.148999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 
[2024-10-13 17:25:02.149867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.149997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150405] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.837 [2024-10-13 17:25:02.150928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151434] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.838 [2024-10-13 17:25:02.151860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" errors repeated continuously from 17:25:02.151889 through 17:25:02.162136; duplicates omitted ...]
00:14:53.841 [2024-10-13 17:25:02.162169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 
17:25:02.162608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.162993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 
[2024-10-13 17:25:02.163541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.163977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.164009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.841 [2024-10-13 17:25:02.164043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164110] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164883] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.164984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 
17:25:02.165865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.165979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:53.842 [2024-10-13 17:25:02.166834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166928] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.166987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.842 [2024-10-13 17:25:02.167865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.167903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.167933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.167969] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.843 [2024-10-13 17:25:02.168531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.843 [... same "Read NLB 1 * block size 512 > SGL length 1" error repeated verbatim, timestamps 17:25:02.168562 through 17:25:02.178470 ...] 00:14:53.846 [2024-10-13 17:25:02.178495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.178518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.178549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.178580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.178609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.178639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.178672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 
17:25:02.179323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.179999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 
[2024-10-13 17:25:02.180271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.846 [2024-10-13 17:25:02.180445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180672] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.180987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181609] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.181976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 
17:25:02.182442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.182974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 
[2024-10-13 17:25:02.183739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.183996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.847 [2024-10-13 17:25:02.184027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184141] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184844] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.848 [2024-10-13 17:25:02.184876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.849 true 00:14:53.851 [2024-10-13
17:25:02.195185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.851 [2024-10-13 17:25:02.195840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.195874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.195905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.195933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.195961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 
[2024-10-13 17:25:02.196379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196793] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.196989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197623] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.197983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 
17:25:02.198692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.198976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.199006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.199037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.199070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.199099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.199130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.852 [2024-10-13 17:25:02.199161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 
[2024-10-13 17:25:02.199564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.199975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200002] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:53.853 [2024-10-13 17:25:02.200791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.200976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 17:25:02.201493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853 [2024-10-13 
17:25:02.201516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.853
17:25:02.211782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.856 [2024-10-13 17:25:02.211814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.856 [2024-10-13 17:25:02.211844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.856 [2024-10-13 17:25:02.211879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.856 [2024-10-13 17:25:02.211909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.211945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.211975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 
[2024-10-13 17:25:02.212687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.212985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213246] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213929] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.213999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 
17:25:02.214736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 17:25:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:53.857 [2024-10-13 17:25:02.214796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 [2024-10-13 17:25:02.214855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.857 17:25:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.857 [2024-10-13 17:25:02.215400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.215984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 
17:25:02.216072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.216936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 
[2024-10-13 17:25:02.216968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217466] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.858 [2024-10-13 17:25:02.217947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous message repeated from 00:14:53.858 through 00:14:53.861 (2024-10-13 17:25:02.217981 - 17:25:02.227595)]
> SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.227973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.228000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.228030] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.228071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.228101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.228124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.228149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.228173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.861 [2024-10-13 17:25:02.228197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 
17:25:02.228903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.228996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.229987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 
[2024-10-13 17:25:02.230046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230429] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.230994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.231018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.862 [2024-10-13 17:25:02.231041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231137] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.231988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 
17:25:02.232260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.232983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:53.863 [2024-10-13 17:25:02.233217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233283] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.863 [2024-10-13 17:25:02.233738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243741] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.243772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.866 [2024-10-13 17:25:02.244432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 
17:25:02.244888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.244995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.245991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 
[2024-10-13 17:25:02.246025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246565] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.867 [2024-10-13 17:25:02.246999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247425] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.867 [2024-10-13 17:25:02.247963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.247996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 
17:25:02.248436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.248978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 
[2024-10-13 17:25:02.249318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249678] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.868 [2024-10-13 17:25:02.249707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [message repeated with successive timestamps through 00:14:53.871 [2024-10-13 17:25:02.259685]] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.259712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.259743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.259777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.259806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.259837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.259869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.259898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.259929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.259965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.259995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260654] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.260771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.871 [2024-10-13 17:25:02.261402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 
17:25:02.261727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.261989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 
[2024-10-13 17:25:02.262635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.262910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263264] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.263980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264163] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.872 [2024-10-13 17:25:02.264740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.264773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.264824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.264852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.264886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.264914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.264944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 
17:25:02.264976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 Message suppressed 999 times: [2024-10-13 17:25:02.265798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 Read completed with error (sct=0, sc=15) 00:14:53.873 [2024-10-13 17:25:02.265830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.265973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.266009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.266041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.266078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.266105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.266129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.873 [2024-10-13 17:25:02.266154] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276480] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.276980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277367] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.277947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 
17:25:02.278485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.876 [2024-10-13 17:25:02.278679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.278984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 
[2024-10-13 17:25:02.279179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279598] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.279999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280456] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.280998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 
17:25:02.281535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.877 [2024-10-13 17:25:02.281857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.281891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.281920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.281952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.281983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 
[2024-10-13 17:25:02.282411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282873] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.878 [2024-10-13 17:25:02.282900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd errors repeated from 17:25:02.282929 through 17:25:02.292724; duplicates omitted]
[2024-10-13 17:25:02.292748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.292771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.292794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.292818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.292842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.292865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.292889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.292912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.292935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.292959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.292982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293091] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.293991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294378] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.294985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 
17:25:02.295491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.881 [2024-10-13 17:25:02.295804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.295835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.295865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.295895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.295931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.295962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.295993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.296024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.296066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.296095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.296125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.296155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.296187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.296217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.296255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.296286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:53.882 [2024-10-13 17:25:02.296318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 
[2024-10-13 17:25:02.296417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296861] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.296978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.297011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.297039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.297067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.297091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.297119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.297149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.180 [2024-10-13 17:25:02.297177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297824] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.297991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 
17:25:02.298569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 [2024-10-13 17:25:02.298923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 Message suppressed 999 times: [2024-10-13 17:25:02.300233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.181 Read completed with error (sct=0, sc=15) 00:14:54.181
17:25:02.310052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.310994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 
[2024-10-13 17:25:02.311021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.311050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.311082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.311117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.311141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.311164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.311189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.311213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.311236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.311260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.184 [2024-10-13 17:25:02.311284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311378] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.311976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312146] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 
17:25:02.312949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.312979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.313989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 
[2024-10-13 17:25:02.314017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.314054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.314090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.314120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.314152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.314184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.185 [2024-10-13 17:25:02.314215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314428] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.314790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315595] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.315978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.316008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.316042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.186 [2024-10-13 17:25:02.316079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... previous message repeated verbatim for timestamps 2024-10-13 17:25:02.316113 through 17:25:02.326369; only the timestamps differ ...]
00:14:54.189 [2024-10-13 17:25:02.326403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 
17:25:02.326878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.326995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 
[2024-10-13 17:25:02.327657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.327979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.328012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.328038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.189 [2024-10-13 17:25:02.328075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328329] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.328982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329213] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.329931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 
17:25:02.330562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.330995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 
[2024-10-13 17:25:02.331484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.190 [2024-10-13 17:25:02.331788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.331818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.331846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.331873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.331902] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.331928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.331953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.331984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.191 [2024-10-13 17:25:02.332850] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:54.191 [2024-10-13 17:25:02.332878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:54.191 [previous *ERROR* line repeated with timestamps 2024-10-13 17:25:02.332906 through 17:25:02.335156]
00:14:54.191 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:14:54.192 [2024-10-13 17:25:02.335189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:54.194 [previous *ERROR* line repeated with timestamps 2024-10-13 17:25:02.335220 through 17:25:02.342908]
> SGL length 1 00:14:54.194 [2024-10-13 17:25:02.342932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343546] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.343989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 
17:25:02.344440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.194 [2024-10-13 17:25:02.344474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.344980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 
[2024-10-13 17:25:02.345630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.345979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346080] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346947] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.346973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 
17:25:02.347967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.347992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.348016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.348040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.348070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.195 [2024-10-13 17:25:02.348094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 
[2024-10-13 17:25:02.348678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.348979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.349008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.349033] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.196 [2024-10-13 17:25:02.349079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[last message repeated for timestamps 2024-10-13 17:25:02.349111 through 17:25:02.359437]
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.359890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360427] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.360969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 
17:25:02.361306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.199 [2024-10-13 17:25:02.361454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.361989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 
[2024-10-13 17:25:02.362481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362900] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.362991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363735] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.363986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.364015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.364042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.364075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.364103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.364131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.364159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.200 [2024-10-13 17:25:02.364186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.200 17:25:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.200 [2024-10-13 17:25:02.534282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.536974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537028] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.537963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.538000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.201 [2024-10-13 17:25:02.538032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 
17:25:02.538137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.538973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 
[2024-10-13 17:25:02.538996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539412] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.539780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540574] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.540991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.541021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.202 [2024-10-13 17:25:02.541048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 
17:25:02.541360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.541858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 
[2024-10-13 17:25:02.542564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542907] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.542987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.203 [2024-10-13 17:25:02.543363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553652] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.553993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.206 [2024-10-13 17:25:02.554026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 
17:25:02.554701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.554990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 
[2024-10-13 17:25:02.555580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555960] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.555984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556827] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.556970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.557000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.557032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.557066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.557099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.557130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.557164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.557194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.557220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.557249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.207 [2024-10-13 17:25:02.557280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 
17:25:02.557740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.557982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 
[2024-10-13 17:25:02.558817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.558977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.559002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.559032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.559068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.559095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.559126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.559162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.559194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.559228] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.208 [2024-10-13 17:25:02.559262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:54.210 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:14:54.211 17:25:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:14:54.211 17:25:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
[2024-10-13 17:25:02.568412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.568989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 
17:25:02.569076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 
[2024-10-13 17:25:02.569795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.569992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.570028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.570057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.570098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.211 [2024-10-13 17:25:02.570131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570229] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.570976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571388] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.571994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 
17:25:02.572255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.572943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 
[2024-10-13 17:25:02.573460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573878] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.212 [2024-10-13 17:25:02.573906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.573937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.573964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.573991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574627] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.574950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.575349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.575378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.575408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.213 [2024-10-13 17:25:02.575435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 
17:25:02.585922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.585986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.216 [2024-10-13 17:25:02.586753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 
[2024-10-13 17:25:02.586778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.586801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.586831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.586867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.586896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.586924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.586955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.586986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587210] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.587988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588352] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.588983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 
17:25:02.589196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.589762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.590006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.590041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.590077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.590108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.590137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.590168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 
[2024-10-13 17:25:02.590198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.590224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.217 [2024-10-13 17:25:02.590254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590615] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.590986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591569] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.218 [2024-10-13 17:25:02.591601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.220 Message suppressed 999 times: [2024-10-13 17:25:02.598140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.220 Read completed with error (sct=0, sc=15) 00:14:54.221 [2024-10-13 17:25:02.601898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 
17:25:02.602502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.602977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 
[2024-10-13 17:25:02.603339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.221 [2024-10-13 17:25:02.603597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603769] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.603935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604937] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.604985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 
17:25:02.605766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.605976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.222 [2024-10-13 17:25:02.606454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 
[2024-10-13 17:25:02.606719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.606970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607165] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.607978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.608004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.608029] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.223 [2024-10-13 17:25:02.608055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(identical error line repeated continuously from 17:25:02.608 through 17:25:02.617; subsequent duplicates omitted)
> SGL length 1 00:14:54.226 [2024-10-13 17:25:02.617954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.617982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618370] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.618971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 
17:25:02.619297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.226 [2024-10-13 17:25:02.619388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.619973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 
[2024-10-13 17:25:02.620255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620708] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.620975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621726] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.621978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 
17:25:02.622489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.227 [2024-10-13 17:25:02.622518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.622950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 
[2024-10-13 17:25:02.623525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623863] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.623981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.228 [2024-10-13 17:25:02.624323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [...] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:54.231 [2024-10-13 17:25:02.635260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.231 [2024-10-13 17:25:02.635614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635638] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.635990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 
17:25:02.636329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.636994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 
[2024-10-13 17:25:02.637340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637859] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.637981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.232 [2024-10-13 17:25:02.638704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.638736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.638767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.638800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.638830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.638864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.638894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.638928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.638958] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.638992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.639992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 
17:25:02.640019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 
[2024-10-13 17:25:02.640829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.640987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641287] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.233 [2024-10-13 17:25:02.641378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.234 [2024-10-13 17:25:02.641748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:54.234 [identical *ERROR* line repeated; occurrences with timestamps 17:25:02.641777 through 17:25:02.652031 omitted]
00:14:54.237 [2024-10-13 17:25:02.652061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652575] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.652999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 
17:25:02.653308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.237 [2024-10-13 17:25:02.653686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.653977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 
[2024-10-13 17:25:02.654073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654656] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.654973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655510] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.655977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 
17:25:02.656567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.238 [2024-10-13 17:25:02.656683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.656709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.656736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.656765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.656806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.656838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.656866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.656900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.656935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.656969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.656998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 
[2024-10-13 17:25:02.657432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657855] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.239 [2024-10-13 17:25:02.657884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.241 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:54.242 [2024-10-13 17:25:02.667785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:14:54.242 [2024-10-13 17:25:02.667813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.667844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.667874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.667904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.667932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.667960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.667996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668256] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.242 [2024-10-13 17:25:02.668335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.668990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 
17:25:02.669372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.669967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.670003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.670036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.670071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.670099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.670136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.670163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.670190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.529 [2024-10-13 17:25:02.670214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 
[2024-10-13 17:25:02.670241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670587] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.670999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671725] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.671999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 
17:25:02.672571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.672764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.530 [2024-10-13 17:25:02.673576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 
[2024-10-13 17:25:02.673693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.673977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.674006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.674033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.674070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.674102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.674132] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.531 [2024-10-13 17:25:02.674165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684391] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.684990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685247] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.685978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.534 [2024-10-13 17:25:02.686013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 
17:25:02.686407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.686997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 
[2024-10-13 17:25:02.687300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687711] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.687867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688842] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.688999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.689024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.689058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.689092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.535 [2024-10-13 17:25:02.689117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 
17:25:02.689637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.689994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 
[2024-10-13 17:25:02.690746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.690971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.691000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.691032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.691058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.691086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 [2024-10-13 17:25:02.691111] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.536 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:54.539 [2024-10-13 17:25:02.701095] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.539 [2024-10-13 17:25:02.701616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701897] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.701982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 
17:25:02.702678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.702982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 
[2024-10-13 17:25:02.703819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.703999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704424] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.704987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.705020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.540 [2024-10-13 17:25:02.705050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705435] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.705982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 
17:25:02.706332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.706984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 
[2024-10-13 17:25:02.707327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707772] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.541 [2024-10-13 17:25:02.707809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (previous *ERROR* line repeated verbatim through [2024-10-13 17:25:02.717967]; duplicate entries omitted) 00:14:54.544 
[2024-10-13 17:25:02.718000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.544 [2024-10-13 17:25:02.718028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.544 [2024-10-13 17:25:02.718116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.544 [2024-10-13 17:25:02.718145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718466] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.718980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719345] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 true 00:14:54.545 [2024-10-13 17:25:02.719791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.719935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 
[2024-10-13 17:25:02.720181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720918] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.720975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.545 [2024-10-13 17:25:02.721371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721719] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.721978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 
17:25:02.722605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.722978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 
[2024-10-13 17:25:02.723481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 [2024-10-13 17:25:02.723936] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.546 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:54.549 [2024-10-13 17:25:02.733651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.733953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734079] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.734956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 
17:25:02.735260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.735998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 
[2024-10-13 17:25:02.736123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736789] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.550 [2024-10-13 17:25:02.736912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.736938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.736971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737682] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.737987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 
17:25:02.738497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.738992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 
[2024-10-13 17:25:02.739541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.551 [2024-10-13 17:25:02.739823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.552 [2024-10-13 17:25:02.739847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.552 [2024-10-13 17:25:02.739870] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.552 [2024-10-13 17:25:02.739903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.553 17:25:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:54.553 17:25:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.555 [2024-10-13 17:25:02.750383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 
17:25:02.750823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.750986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 
[2024-10-13 17:25:02.751962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.751991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752347] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.752980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.753010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.555 [2024-10-13 17:25:02.753037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753160] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.753982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 
17:25:02.754044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.754900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 
[2024-10-13 17:25:02.755135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755639] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.556 [2024-10-13 17:25:02.755729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.755777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.755806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.755832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.755865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.755897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.755928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.755960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.755987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756523] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.557 [2024-10-13 17:25:02.756551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766661] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:54.560 [2024-10-13 17:25:02.766845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.766987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 
17:25:02.767117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.560 [2024-10-13 17:25:02.767460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.767989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 
[2024-10-13 17:25:02.768216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768641] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.768986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769577] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.769897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.561 [2024-10-13 17:25:02.770563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 
17:25:02.770622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.770998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 
[2024-10-13 17:25:02.771307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771685] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.771781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.772530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.773212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.773242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.773274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.773303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.773329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.773362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.773392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.773421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.562 [2024-10-13 17:25:02.773447] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:14:54.565 [2024-10-13 17:25:02.782859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.782891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.782918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783333] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.565 [2024-10-13 17:25:02.783935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.783963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.783996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 
17:25:02.784234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.784885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 
[2024-10-13 17:25:02.785303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785733] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.785992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786632] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.566 [2024-10-13 17:25:02.786868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.786892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.786915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.786938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.786968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.786998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 
17:25:02.787668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.787981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 
[2024-10-13 17:25:02.788610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.788991] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.567 [2024-10-13 17:25:02.789787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.799988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800059] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:54.570 [2024-10-13 17:25:02.800130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.570 [2024-10-13 17:25:02.800352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 
17:25:02.800447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.800996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 
[2024-10-13 17:25:02.801138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801683] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.801995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802664] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.802972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 
17:25:02.803750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.803986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.571 [2024-10-13 17:25:02.804019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 
[2024-10-13 17:25:02.804735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.804967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805181] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.572 [2024-10-13 17:25:02.805652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated from 17:25:02.805678 through 17:25:02.815479; repeats elided ...]
00:14:54.576 [2024-10-13 17:25:02.815503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.576 [2024-10-13 17:25:02.815526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.815773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.815806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.815838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.815871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.815900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.815927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.815958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.815990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816212] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.816842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 
17:25:02.817409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.817973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 
[2024-10-13 17:25:02.818336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818792] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.818991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.576 [2024-10-13 17:25:02.819025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819812] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.819978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 
17:25:02.820570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.820995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 
[2024-10-13 17:25:02.821747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.821970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.822003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.822034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.822075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.822106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.822136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.822169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.822201] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.577 [2024-10-13 17:25:02.822230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [last message repeated for timestamps 17:25:02.822262 through 17:25:02.832242] [2024-10-13 17:25:02.832267] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.832988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833018] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.833994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.834017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.834041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.581 [2024-10-13 17:25:02.834070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 
17:25:02.834167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:54.582 [2024-10-13 17:25:02.834704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.834978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835008] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.835968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836141] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.582 [2024-10-13 17:25:02.836865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.836895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.836926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.836962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.836991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 
17:25:02.837084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.837837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 
[2024-10-13 17:25:02.838180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838596] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.583 [2024-10-13 17:25:02.838624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical error line repeated for timestamps 2024-10-13 17:25:02.838663 through 17:25:02.848351]
ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.586 [2024-10-13 17:25:02.848382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.586 [2024-10-13 17:25:02.848412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.586 [2024-10-13 17:25:02.848445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.586 [2024-10-13 17:25:02.848476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.586 [2024-10-13 17:25:02.848507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.586 [2024-10-13 17:25:02.848559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.848995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849273] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.849975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 
17:25:02.850204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.850974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 
[2024-10-13 17:25:02.851252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851696] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.587 [2024-10-13 17:25:02.851887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.851916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.851943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.851967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.851991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852623] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.852978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 
17:25:02.853523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.853998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 
[2024-10-13 17:25:02.854813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.854991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.855021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.855056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.855092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.855130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.855163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.855194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.855227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.588 [2024-10-13 17:25:02.855261] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-10-13 17:25:02.864871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.864899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.864924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.864948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.864978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865293] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.865982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 Message suppressed 999 times: [2024-10-13 17:25:02.866133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 Read completed with error (sct=0, sc=15) 00:14:54.592 [2024-10-13 17:25:02.866159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 
17:25:02.866575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.866982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 
[2024-10-13 17:25:02.867484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.592 [2024-10-13 17:25:02.867679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867824] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.867995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868780] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.868968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 
17:25:02.869871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.869979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.593 [2024-10-13 17:25:02.870727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 
[2024-10-13 17:25:02.870774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.870804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.870836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.870869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.870904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.870933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.870964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.870995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.871032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.871067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.871099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.871135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.871167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.871201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 [2024-10-13 17:25:02.871235] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:54.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.596 17:25:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.596 Message suppressed 999 times: Read
completed with error (sct=0, sc=11) 00:14:54.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.857 17:25:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:54.857 17:25:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:54.857 true 00:14:54.857 17:25:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:54.857 17:25:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.893 17:25:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.893 17:25:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:55.893 17:25:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:56.201 true 00:14:56.201 17:25:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:56.201 17:25:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.201 17:25:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.461 17:25:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:56.461 17:25:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:56.461 true 
00:14:56.461 17:25:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:56.461 17:25:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:57.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:57.845 17:25:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:57.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:57.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:57.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:57.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:57.845 17:25:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:57.845 17:25:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:58.106 true 00:14:58.106 17:25:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:58.106 17:25:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.049 17:25:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.049 17:25:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:59.049 17:25:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:59.310 true 00:14:59.310 17:25:07 -- 
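Each pass of the stress loop logged above issues the same three RPCs: add the Delay0 namespace to cnode1 (step @46), bump `null_size` and resize NULL1 (steps @49/@50), then remove the namespace again while I/O is in flight (step @45). A sketch of one iteration's command sequence (illustrative Python; the `rpc.py` path is shortened from the log's full workspace path):

```python
def hotplug_iteration(null_size: int, nqn: str = "nqn.2016-06.io.spdk:cnode1"):
    """Return the RPC commands one loop pass of ns_hotplug_stress.sh issues."""
    return [
        f"rpc.py nvmf_subsystem_add_ns {nqn} Delay0",  # step @46
        f"rpc.py bdev_null_resize NULL1 {null_size}",  # steps @49/@50
        f"rpc.py nvmf_subsystem_remove_ns {nqn} 1",    # step @45
    ]

cmds = hotplug_iteration(1019)
assert len(cmds) == 3
assert cmds[1].endswith("NULL1 1019")
```

The resize target grows by one each pass (1016, 1017, 1018, ...), which is why the logged `null_size` values increment monotonically.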
target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:14:59.310 17:25:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.254 17:25:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.254 17:25:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:00.254 17:25:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:00.515 true 00:15:00.515 17:25:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:00.515 17:25:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.515 17:25:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.777 17:25:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:00.777 17:25:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:01.037 true 00:15:01.037 17:25:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:01.037 17:25:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.037 17:25:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:01.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:01.298 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:15:01.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:01.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:01.298 17:25:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:01.298 17:25:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:01.560 true 00:15:01.560 17:25:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:01.560 17:25:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:02.501 17:25:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:02.501 17:25:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:02.501 17:25:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:02.501 true 00:15:02.761 17:25:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:02.761 17:25:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.761 17:25:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.022 17:25:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:03.022 17:25:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:03.022 true 
00:15:03.281 17:25:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:03.281 17:25:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.281 17:25:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.542 17:25:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:03.542 17:25:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:03.542 true 00:15:03.542 17:25:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:03.542 17:25:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.803 17:25:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.065 17:25:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:04.065 17:25:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:04.065 true 00:15:04.065 17:25:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:04.065 17:25:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.326 17:25:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.587 17:25:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:04.587 17:25:12 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:04.587 true 00:15:04.587 17:25:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:04.587 17:25:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.848 17:25:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.109 17:25:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:05.109 17:25:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:05.109 true 00:15:05.109 17:25:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:05.109 17:25:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.370 17:25:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.631 17:25:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:15:05.631 17:25:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:15:05.631 true 00:15:05.631 17:25:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:05.631 17:25:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.892 17:25:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.153 17:25:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 
00:15:06.153 17:25:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:15:06.153 true 00:15:06.153 17:25:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:06.153 17:25:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.413 17:25:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.673 17:25:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:15:06.673 17:25:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:15:06.673 true 00:15:06.673 17:25:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:06.673 17:25:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.933 17:25:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.194 17:25:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:15:07.194 17:25:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:15:07.194 true 00:15:07.194 17:25:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:07.194 17:25:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.454 17:25:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.715 
17:25:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:15:07.715 17:25:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:15:07.715 true 00:15:07.715 17:25:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:07.715 17:25:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.976 17:25:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.237 17:25:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:15:08.237 17:25:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:15:08.237 true 00:15:08.237 17:25:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:08.237 17:25:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.497 17:25:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.758 17:25:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:15:08.758 17:25:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:15:08.758 true 00:15:08.758 17:25:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:08.758 17:25:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.020 17:25:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.281 17:25:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:15:09.281 17:25:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:15:09.281 true 00:15:09.281 17:25:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:09.281 17:25:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.542 17:25:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.801 17:25:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:15:09.801 17:25:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:15:09.801 true 00:15:09.801 17:25:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:09.801 17:25:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.061 17:25:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.321 17:25:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:15:10.321 17:25:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:15:10.321 true 00:15:10.321 17:25:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:10.321 17:25:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.582 17:25:18 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.842 17:25:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:15:10.842 17:25:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:15:10.842 true 00:15:10.842 17:25:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:10.842 17:25:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.102 17:25:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.363 17:25:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:15:11.363 17:25:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:15:11.363 true 00:15:11.363 17:25:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:11.363 17:25:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.623 17:25:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.883 17:25:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:15:11.884 17:25:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:15:11.884 true 00:15:11.884 17:25:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:11.884 17:25:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:15:12.144 17:25:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.404 17:25:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:15:12.404 17:25:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:15:12.404 true 00:15:12.404 17:25:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:12.404 17:25:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.664 17:25:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.925 17:25:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:15:12.925 17:25:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:15:12.925 true 00:15:12.925 17:25:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:12.925 17:25:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.185 17:25:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.445 17:25:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:15:13.445 17:25:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:15:13.445 true 00:15:13.445 17:25:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:13.445 17:25:21 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.705 17:25:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.965 17:25:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:15:13.965 17:25:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:15:13.965 true 00:15:13.965 17:25:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:13.965 17:25:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.224 17:25:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:14.485 17:25:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:15:14.485 17:25:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:15:14.485 true 00:15:14.485 17:25:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:14.485 17:25:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.746 17:25:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.007 17:25:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:15:15.007 17:25:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:15:15.007 true 00:15:15.007 17:25:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 
00:15:15.007 17:25:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.267 17:25:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.527 17:25:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:15:15.527 17:25:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:15:15.527 true 00:15:15.527 17:25:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:15.527 17:25:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.788 17:25:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.048 17:25:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:15:16.048 17:25:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:15:16.048 true 00:15:16.048 17:25:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732 00:15:16.048 17:25:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.310 17:25:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.310 Initializing NVMe Controllers 00:15:16.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:16.310 Controller IO queue size 128, less than required. 
00:15:16.310 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:16.310 Controller IO queue size 128, less than required.
00:15:16.310 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:16.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:16.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:15:16.310 Initialization complete. Launching workers.
00:15:16.310 ========================================================
00:15:16.310 Latency(us)
00:15:16.310 Device Information : IOPS MiB/s Average min max
00:15:16.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1828.79 0.89 16113.47 1488.98 1269980.66
00:15:16.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7064.16 3.45 18060.26 1434.05 397661.86
00:15:16.310 ========================================================
00:15:16.310 Total : 8892.95 4.34 17659.91 1434.05 1269980.66
00:15:16.310
00:15:16.572 17:25:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:15:16.572 17:25:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 true
00:15:16.572 17:25:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3109732
00:15:16.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3109732) - No such process
00:15:16.572 17:25:25 -- target/ns_hotplug_stress.sh@53 -- # wait 3109732
00:15:16.572 17:25:25 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:16.833 17:25:25 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns
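The Total row of the latency summary is consistent with an IOPS-weighted mean of the two per-namespace averages; a quick arithmetic check (values copied from the log above):

```python
# Per-namespace rows from the summary: (IOPS, average latency in us).
rows = [(1828.79, 16113.47), (7064.16, 18060.26)]

total_iops = sum(iops for iops, _ in rows)  # matches the Total IOPS column
weighted_avg = sum(iops * avg for iops, avg in rows) / total_iops

# Agrees with the reported Total: 8892.95 IOPS, 17659.91 us average.
assert abs(total_iops - 8892.95) < 0.01
assert abs(weighted_avg - 17659.91) < 0.05
```

Weighting by IOPS works here because both namespaces were measured over the same run, so IOPS is proportional to completed I/O count.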
nqn.2016-06.io.spdk:cnode1 2 00:15:17.093 17:25:25 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:17.093 17:25:25 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:17.093 17:25:25 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:17.093 17:25:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:17.093 17:25:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:17.093 null0 00:15:17.093 17:25:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:17.093 17:25:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:17.093 17:25:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:17.353 null1 00:15:17.353 17:25:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:17.353 17:25:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:17.353 17:25:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:17.613 null2 00:15:17.613 17:25:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:17.613 17:25:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:17.613 17:25:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:17.613 null3 00:15:17.613 17:25:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:17.613 17:25:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:17.613 17:25:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:17.874 null4 00:15:17.874 17:25:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:17.874 17:25:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:17.874 17:25:26 -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:18.135 null5 00:15:18.135 17:25:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:18.135 17:25:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:18.135 17:25:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:18.135 null6 00:15:18.135 17:25:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:18.135 17:25:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:18.135 17:25:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:18.395 null7 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:18.395 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@66 -- # wait 3116292 3116295 3116297 3116300 3116302 3116305 3116308 3116311 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.396 17:25:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:18.655 17:25:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.655 17:25:26 -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.655 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:18.916 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:19.176 17:25:27 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:19.176 17:25:27 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:19.176 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.437 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:19.698 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.698 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.698 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:19.698 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.698 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.698 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:19.698 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.698 17:25:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.698 17:25:27 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:19.698 17:25:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.698 17:25:28 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.698 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.959 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.221 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:20.482 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.482 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:20.482 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.482 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.482 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:20.483 17:25:28 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.483 17:25:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:20.483 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.483 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.483 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:20.742 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
00:15:20.742 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.742 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.742 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:20.742 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:20.742 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.742 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.742 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.743 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.003 17:25:29 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.003 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:21.263 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:21.263 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:21.263 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:21.264 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:21.525 17:25:29 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.525 17:25:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:21.525 17:25:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:21.525 17:25:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:15:21.786 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.787 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:22.048 17:25:30 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:22.048 17:25:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:22.048 17:25:30 -- nvmf/common.sh@116 -- # sync 00:15:22.048 17:25:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:22.048 17:25:30 -- nvmf/common.sh@119 -- # set +e 00:15:22.048 17:25:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:22.048 17:25:30 -- 
nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:22.048 rmmod nvme_tcp 00:15:22.048 rmmod nvme_fabrics 00:15:22.308 rmmod nvme_keyring 00:15:22.308 17:25:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:22.308 17:25:30 -- nvmf/common.sh@123 -- # set -e 00:15:22.308 17:25:30 -- nvmf/common.sh@124 -- # return 0 00:15:22.308 17:25:30 -- nvmf/common.sh@477 -- # '[' -n 3109122 ']' 00:15:22.308 17:25:30 -- nvmf/common.sh@478 -- # killprocess 3109122 00:15:22.308 17:25:30 -- common/autotest_common.sh@926 -- # '[' -z 3109122 ']' 00:15:22.308 17:25:30 -- common/autotest_common.sh@930 -- # kill -0 3109122 00:15:22.308 17:25:30 -- common/autotest_common.sh@931 -- # uname 00:15:22.308 17:25:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:22.308 17:25:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3109122 00:15:22.308 17:25:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:22.308 17:25:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:22.308 17:25:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3109122' 00:15:22.308 killing process with pid 3109122 00:15:22.308 17:25:30 -- common/autotest_common.sh@945 -- # kill 3109122 00:15:22.308 17:25:30 -- common/autotest_common.sh@950 -- # wait 3109122 00:15:22.308 17:25:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:22.308 17:25:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:22.308 17:25:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:22.308 17:25:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:22.308 17:25:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:22.308 17:25:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.308 17:25:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.308 17:25:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.852 17:25:32 -- nvmf/common.sh@278 -- # ip -4 
addr flush cvl_0_1 00:15:24.852 00:15:24.852 real 0m48.499s 00:15:24.852 user 3m15.370s 00:15:24.852 sys 0m15.709s 00:15:24.852 17:25:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.852 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:15:24.852 ************************************ 00:15:24.852 END TEST nvmf_ns_hotplug_stress 00:15:24.852 ************************************ 00:15:24.852 17:25:32 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:24.852 17:25:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:24.852 17:25:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:24.852 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:15:24.852 ************************************ 00:15:24.852 START TEST nvmf_connect_stress 00:15:24.852 ************************************ 00:15:24.852 17:25:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:24.852 * Looking for test storage... 
00:15:24.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.852 17:25:32 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.852 17:25:32 -- nvmf/common.sh@7 -- # uname -s 00:15:24.852 17:25:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.852 17:25:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.852 17:25:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.852 17:25:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.852 17:25:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.852 17:25:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.852 17:25:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.852 17:25:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.852 17:25:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.852 17:25:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.852 17:25:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:24.852 17:25:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:24.852 17:25:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.852 17:25:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.852 17:25:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.852 17:25:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.852 17:25:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.852 17:25:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.852 17:25:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.852 17:25:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.852 17:25:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.852 17:25:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.852 17:25:33 -- paths/export.sh@5 -- # export PATH 00:15:24.852 17:25:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.852 17:25:33 -- nvmf/common.sh@46 -- # : 0 00:15:24.852 17:25:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:24.852 17:25:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:24.852 17:25:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:24.852 17:25:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.852 17:25:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.852 17:25:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:24.852 17:25:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:24.852 17:25:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:24.852 17:25:33 -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:24.852 17:25:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:24.852 17:25:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.852 17:25:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:24.852 17:25:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:24.852 17:25:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:24.852 17:25:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.852 17:25:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.852 17:25:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.852 17:25:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:24.852 17:25:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:24.852 17:25:33 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:15:24.852 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:15:31.436 17:25:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:31.436 17:25:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:31.436 17:25:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:31.436 17:25:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:31.436 17:25:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:31.436 17:25:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:31.436 17:25:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:31.436 17:25:39 -- nvmf/common.sh@294 -- # net_devs=() 00:15:31.436 17:25:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:31.436 17:25:39 -- nvmf/common.sh@295 -- # e810=() 00:15:31.436 17:25:39 -- nvmf/common.sh@295 -- # local -ga e810 00:15:31.436 17:25:39 -- nvmf/common.sh@296 -- # x722=() 00:15:31.436 17:25:39 -- nvmf/common.sh@296 -- # local -ga x722 00:15:31.436 17:25:39 -- nvmf/common.sh@297 -- # mlx=() 00:15:31.436 17:25:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:31.436 17:25:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:31.436 17:25:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:31.436 17:25:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:31.436 17:25:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:31.436 17:25:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:31.436 17:25:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:31.437 17:25:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:31.437 17:25:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:31.437 17:25:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:31.437 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:31.437 17:25:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:31.437 17:25:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:31.437 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:31.437 17:25:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:31.437 17:25:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:15:31.437 17:25:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.437 17:25:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:31.437 17:25:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.437 17:25:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:31.437 Found net devices under 0000:31:00.0: cvl_0_0 00:15:31.437 17:25:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.437 17:25:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:31.437 17:25:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.437 17:25:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:31.437 17:25:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.437 17:25:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:31.437 Found net devices under 0000:31:00.1: cvl_0_1 00:15:31.437 17:25:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.437 17:25:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:31.437 17:25:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:31.437 17:25:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:31.437 17:25:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:31.437 17:25:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.437 17:25:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.437 17:25:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:31.437 17:25:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:31.437 17:25:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:31.437 17:25:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:31.437 17:25:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:31.437 17:25:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:15:31.437 17:25:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.437 17:25:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:31.437 17:25:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:31.437 17:25:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:31.437 17:25:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:31.437 17:25:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:31.698 17:25:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:31.698 17:25:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:31.698 17:25:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:31.698 17:25:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:31.698 17:25:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:31.698 17:25:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:31.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:15:31.698 00:15:31.698 --- 10.0.0.2 ping statistics --- 00:15:31.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.698 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:15:31.698 17:25:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:31.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:31.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:15:31.698 00:15:31.698 --- 10.0.0.1 ping statistics --- 00:15:31.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.698 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:15:31.698 17:25:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.698 17:25:40 -- nvmf/common.sh@410 -- # return 0 00:15:31.698 17:25:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:31.698 17:25:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.698 17:25:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:31.698 17:25:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:31.698 17:25:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.698 17:25:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:31.698 17:25:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:31.698 17:25:40 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:31.698 17:25:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:31.698 17:25:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:31.698 17:25:40 -- common/autotest_common.sh@10 -- # set +x 00:15:31.698 17:25:40 -- nvmf/common.sh@469 -- # nvmfpid=3121325 00:15:31.698 17:25:40 -- nvmf/common.sh@470 -- # waitforlisten 3121325 00:15:31.698 17:25:40 -- common/autotest_common.sh@819 -- # '[' -z 3121325 ']' 00:15:31.698 17:25:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.698 17:25:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:31.698 17:25:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:31.698 17:25:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:31.698 17:25:40 -- common/autotest_common.sh@10 -- # set +x 00:15:31.698 17:25:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:31.698 [2024-10-13 17:25:40.217547] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:31.698 [2024-10-13 17:25:40.217612] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.958 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.958 [2024-10-13 17:25:40.307389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:31.958 [2024-10-13 17:25:40.354043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:31.958 [2024-10-13 17:25:40.354206] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.958 [2024-10-13 17:25:40.354217] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.958 [2024-10-13 17:25:40.354226] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:31.958 [2024-10-13 17:25:40.354405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.959 [2024-10-13 17:25:40.354650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.959 [2024-10-13 17:25:40.354650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.529 17:25:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:32.529 17:25:40 -- common/autotest_common.sh@852 -- # return 0 00:15:32.529 17:25:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:32.529 17:25:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:32.529 17:25:40 -- common/autotest_common.sh@10 -- # set +x 00:15:32.529 17:25:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.529 17:25:41 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:32.529 17:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.529 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:32.529 [2024-10-13 17:25:41.038122] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.529 17:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.529 17:25:41 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:32.529 17:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.529 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:32.789 17:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.789 17:25:41 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.789 17:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.789 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:32.789 [2024-10-13 17:25:41.062515] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:15:32.789 17:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.789 17:25:41 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:32.789 17:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.789 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:32.789 NULL1 00:15:32.789 17:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.789 17:25:41 -- target/connect_stress.sh@21 -- # PERF_PID=3121676 00:15:32.789 17:25:41 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:32.789 17:25:41 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:32.789 17:25:41 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:32.789 17:25:41 -- target/connect_stress.sh@27 -- # seq 1 20 00:15:32.789 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.789 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.789 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.789 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.789 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.789 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.789 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.789 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.789 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 
-- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 17:25:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.790 17:25:41 -- target/connect_stress.sh@28 -- # cat 00:15:32.790 
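The run of `for i in $(seq 1 20)` / `cat` records above is the harness building its RPC batch file (`rpc.txt`) by appending one block per iteration. The actual heredoc body is elided in the log, so the sketch below uses an `echo` of a stand-in RPC name in its place; `rpcs` here is a temp file, not the real path from the log.

```shell
# Sketch of the rpc.txt construction seen above: 20 iterations, each
# appending one RPC block. "bdev_get_bdevs" is a stand-in for the
# elided heredoc body, not the harness's real content.
rpcs=$(mktemp)
for i in $(seq 1 20); do
  echo "bdev_get_bdevs" >> "$rpcs"   # stand-in for the elided `cat <<EOF` block
done
count=$(wc -l < "$rpcs")
echo "batched $count RPC lines"
rm -f "$rpcs"
```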
17:25:41 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:32.790 17:25:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.790 17:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.790 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:33.049 17:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.049 17:25:41 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:33.049 17:25:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.049 17:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.049 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:33.618 17:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.618 17:25:41 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:33.618 17:25:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.618 17:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.618 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:33.878 17:25:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.878 17:25:42 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:33.878 17:25:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.878 17:25:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.878 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:15:34.138 17:25:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.138 17:25:42 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:34.138 17:25:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.138 17:25:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.138 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:15:34.398 17:25:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.398 17:25:42 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:34.398 17:25:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.398 17:25:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.398 17:25:42 -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.658 17:25:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.658 17:25:43 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:34.658 17:25:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.658 17:25:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.658 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:35.229 17:25:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.229 17:25:43 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:35.229 17:25:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.229 17:25:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.229 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:35.489 17:25:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.489 17:25:43 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:35.489 17:25:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.489 17:25:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.489 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:35.750 17:25:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.750 17:25:44 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:35.750 17:25:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.750 17:25:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.750 17:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.010 17:25:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.010 17:25:44 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:36.010 17:25:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.010 17:25:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.010 17:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.269 17:25:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.269 17:25:44 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:36.269 17:25:44 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.269 17:25:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.269 17:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.839 17:25:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.839 17:25:45 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:36.839 17:25:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.839 17:25:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.839 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:15:37.099 17:25:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.099 17:25:45 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:37.099 17:25:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.099 17:25:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.099 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:15:37.359 17:25:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.359 17:25:45 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:37.359 17:25:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.359 17:25:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.359 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:15:37.620 17:25:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.620 17:25:46 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:37.620 17:25:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.620 17:25:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.620 17:25:46 -- common/autotest_common.sh@10 -- # set +x 00:15:37.880 17:25:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.880 17:25:46 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:37.880 17:25:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.880 17:25:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.880 17:25:46 -- common/autotest_common.sh@10 -- # set +x 00:15:38.451 17:25:46 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.451 17:25:46 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:38.451 17:25:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.451 17:25:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.451 17:25:46 -- common/autotest_common.sh@10 -- # set +x 00:15:38.712 17:25:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.712 17:25:47 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:38.712 17:25:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.712 17:25:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.712 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:15:38.973 17:25:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.973 17:25:47 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:38.973 17:25:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.973 17:25:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.973 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:15:39.233 17:25:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.233 17:25:47 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:39.233 17:25:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.233 17:25:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.233 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:15:39.493 17:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.753 17:25:48 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:39.753 17:25:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.753 17:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.753 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.013 17:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.013 17:25:48 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:40.013 17:25:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.013 17:25:48 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.013 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.273 17:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.273 17:25:48 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:40.273 17:25:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.273 17:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.273 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.533 17:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.533 17:25:49 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:40.533 17:25:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.533 17:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.533 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:15:41.104 17:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.104 17:25:49 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:41.104 17:25:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.104 17:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.104 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:15:41.364 17:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.364 17:25:49 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:41.364 17:25:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.364 17:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.364 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:15:41.625 17:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.625 17:25:49 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:41.625 17:25:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.625 17:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.625 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:15:41.885 17:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.885 17:25:50 -- 
target/connect_stress.sh@34 -- # kill -0 3121676 00:15:41.885 17:25:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.885 17:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.885 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:15:42.146 17:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.146 17:25:50 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:42.146 17:25:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.146 17:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.146 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:15:42.716 17:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.716 17:25:50 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:42.716 17:25:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.716 17:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.716 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:15:42.716 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:43.024 17:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.024 17:25:51 -- target/connect_stress.sh@34 -- # kill -0 3121676 00:15:43.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3121676) - No such process 00:15:43.024 17:25:51 -- target/connect_stress.sh@38 -- # wait 3121676 00:15:43.024 17:25:51 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:43.024 17:25:51 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:43.024 17:25:51 -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:43.024 17:25:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:43.024 17:25:51 -- nvmf/common.sh@116 -- # sync 00:15:43.024 17:25:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:43.024 17:25:51 -- nvmf/common.sh@119 -- # set +e 00:15:43.024 17:25:51 -- nvmf/common.sh@120 -- 
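The long run of `kill -0 3121676` records above (ending in "No such process" once the stress binary exits) is a standard liveness poll: signal 0 delivers nothing, but the `kill` call fails as soon as the PID is gone. A minimal self-contained version of that pattern, with a short `sleep` standing in for the connect_stress process:

```shell
# Liveness polling with kill -0, as in the connect_stress loop above.
# `sleep 1` stands in for the monitored stress process.
sleep 1 &
pid=$!
polls=0
# kill -0 sends no signal; it only reports whether $pid still exists
while kill -0 "$pid" 2>/dev/null; do
  polls=$((polls + 1))
  sleep 0.2
done
wait "$pid" 2>/dev/null
echo "process exited after $polls polls"
```

The `2>/dev/null` mirrors the log's behavior of tolerating the final failed `kill` (the script follows it with `wait`, exactly as line 34/38 of connect_stress.sh do above).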
# for i in {1..20} 00:15:43.024 17:25:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:43.024 rmmod nvme_tcp 00:15:43.024 rmmod nvme_fabrics 00:15:43.024 rmmod nvme_keyring 00:15:43.024 17:25:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:43.024 17:25:51 -- nvmf/common.sh@123 -- # set -e 00:15:43.024 17:25:51 -- nvmf/common.sh@124 -- # return 0 00:15:43.024 17:25:51 -- nvmf/common.sh@477 -- # '[' -n 3121325 ']' 00:15:43.024 17:25:51 -- nvmf/common.sh@478 -- # killprocess 3121325 00:15:43.024 17:25:51 -- common/autotest_common.sh@926 -- # '[' -z 3121325 ']' 00:15:43.024 17:25:51 -- common/autotest_common.sh@930 -- # kill -0 3121325 00:15:43.024 17:25:51 -- common/autotest_common.sh@931 -- # uname 00:15:43.024 17:25:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:43.024 17:25:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3121325 00:15:43.024 17:25:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:43.024 17:25:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:43.024 17:25:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3121325' 00:15:43.024 killing process with pid 3121325 00:15:43.024 17:25:51 -- common/autotest_common.sh@945 -- # kill 3121325 00:15:43.024 17:25:51 -- common/autotest_common.sh@950 -- # wait 3121325 00:15:43.331 17:25:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:43.331 17:25:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:43.331 17:25:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:43.331 17:25:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.331 17:25:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:43.331 17:25:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.331 17:25:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.331 17:25:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.252 
17:25:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:45.252 00:15:45.252 real 0m20.720s 00:15:45.252 user 0m42.129s 00:15:45.252 sys 0m8.634s 00:15:45.252 17:25:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.252 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:15:45.252 ************************************ 00:15:45.252 END TEST nvmf_connect_stress 00:15:45.252 ************************************ 00:15:45.252 17:25:53 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:45.252 17:25:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:45.252 17:25:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:45.252 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:15:45.252 ************************************ 00:15:45.252 START TEST nvmf_fused_ordering 00:15:45.252 ************************************ 00:15:45.252 17:25:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:45.252 * Looking for test storage... 
00:15:45.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.252 17:25:53 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.252 17:25:53 -- nvmf/common.sh@7 -- # uname -s 00:15:45.252 17:25:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.252 17:25:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.252 17:25:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.252 17:25:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.252 17:25:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.252 17:25:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.252 17:25:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.252 17:25:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.252 17:25:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.252 17:25:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.252 17:25:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:45.252 17:25:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:45.252 17:25:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.252 17:25:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.252 17:25:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.252 17:25:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.252 17:25:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.252 17:25:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.252 17:25:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.252 17:25:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.252 17:25:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.252 17:25:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.252 17:25:53 -- paths/export.sh@5 -- # export PATH 00:15:45.252 17:25:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.252 17:25:53 -- nvmf/common.sh@46 -- # : 0 00:15:45.252 17:25:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:45.252 17:25:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:45.252 17:25:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:45.252 17:25:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.252 17:25:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.252 17:25:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:45.252 17:25:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:45.252 17:25:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:45.252 17:25:53 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:45.252 17:25:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:45.252 17:25:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.252 17:25:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:45.252 17:25:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:45.252 17:25:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:45.252 17:25:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.252 17:25:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.252 17:25:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.512 17:25:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:45.512 17:25:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:45.512 17:25:53 -- 
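The `PATH` exported above visibly accumulates the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` entries on every nested `source` of `paths/export.sh`. This is harmless but noisy; one common way to drop the repeats while keeping first-seen order is the sketch below (the sample string is a shortened, hypothetical stand-in for the real PATH):

```shell
# Deduplicate a PATH-style list, keeping the first occurrence of each entry.
# `sample` is a shortened stand-in for the duplicated PATH in the log.
sample='/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin:/usr/bin'
dedup=$(printf '%s' "$sample" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd: -)
echo "$dedup"   # → /opt/go/1.21.1/bin:/usr/bin:/bin
```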
nvmf/common.sh@284 -- # xtrace_disable 00:15:45.512 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:15:53.652 17:26:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:53.652 17:26:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:53.652 17:26:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:53.652 17:26:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:53.652 17:26:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:53.652 17:26:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:53.652 17:26:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:53.652 17:26:00 -- nvmf/common.sh@294 -- # net_devs=() 00:15:53.652 17:26:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:53.652 17:26:00 -- nvmf/common.sh@295 -- # e810=() 00:15:53.652 17:26:00 -- nvmf/common.sh@295 -- # local -ga e810 00:15:53.652 17:26:00 -- nvmf/common.sh@296 -- # x722=() 00:15:53.652 17:26:00 -- nvmf/common.sh@296 -- # local -ga x722 00:15:53.652 17:26:00 -- nvmf/common.sh@297 -- # mlx=() 00:15:53.652 17:26:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:53.652 17:26:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.652 17:26:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:53.652 17:26:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:53.652 17:26:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:53.652 17:26:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:53.652 17:26:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:53.652 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:53.652 17:26:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:53.652 17:26:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:53.652 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:53.652 17:26:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:53.652 17:26:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:53.652 17:26:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:15:53.652 17:26:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.652 17:26:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:53.652 17:26:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.652 17:26:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:53.652 Found net devices under 0000:31:00.0: cvl_0_0 00:15:53.652 17:26:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.652 17:26:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:53.652 17:26:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.652 17:26:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:53.652 17:26:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.652 17:26:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:53.652 Found net devices under 0000:31:00.1: cvl_0_1 00:15:53.652 17:26:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.653 17:26:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:53.653 17:26:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:53.653 17:26:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:53.653 17:26:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:53.653 17:26:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:53.653 17:26:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.653 17:26:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.653 17:26:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:53.653 17:26:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:53.653 17:26:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:53.653 17:26:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:53.653 17:26:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:53.653 17:26:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:15:53.653 17:26:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.653 17:26:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:53.653 17:26:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:53.653 17:26:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:53.653 17:26:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:53.653 17:26:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:53.653 17:26:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:53.653 17:26:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:53.653 17:26:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:53.653 17:26:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:53.653 17:26:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:53.653 17:26:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:53.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:15:53.653 00:15:53.653 --- 10.0.0.2 ping statistics --- 00:15:53.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.653 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:15:53.653 17:26:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:53.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:53.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:15:53.653 00:15:53.653 --- 10.0.0.1 ping statistics --- 00:15:53.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.653 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:15:53.653 17:26:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.653 17:26:01 -- nvmf/common.sh@410 -- # return 0 00:15:53.653 17:26:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:53.653 17:26:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.653 17:26:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:53.653 17:26:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:53.653 17:26:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.653 17:26:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:53.653 17:26:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:53.653 17:26:01 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:53.653 17:26:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:53.653 17:26:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:53.653 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:15:53.653 17:26:01 -- nvmf/common.sh@469 -- # nvmfpid=3127818 00:15:53.653 17:26:01 -- nvmf/common.sh@470 -- # waitforlisten 3127818 00:15:53.653 17:26:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:53.653 17:26:01 -- common/autotest_common.sh@819 -- # '[' -z 3127818 ']' 00:15:53.653 17:26:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.653 17:26:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:53.653 17:26:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
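The two ping checks above (target IP 10.0.0.2 from the host, initiator IP 10.0.0.1 from inside the `cvl_0_0_ns_spdk` netns) each end with an `rtt min/avg/max/mdev` summary line. If a script needed the average RTT from such a line, splitting on spaces and slashes puts it in field 8; a small sketch using the exact summary text from the log:

```shell
# Extract the avg RTT from a ping summary line like the one in the log.
# With -F'[ /]+' the fields are: rtt min avg max mdev = <min> <avg> <max> <mdev> ms
summary='rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms'
avg=$(printf '%s\n' "$summary" | awk -F'[ /]+' '{print $8}')
echo "avg rtt: $avg ms"   # → avg rtt: 0.572 ms
```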
00:15:53.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.653 17:26:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:53.653 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:15:53.653 [2024-10-13 17:26:01.153114] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:53.653 [2024-10-13 17:26:01.153162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.653 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.653 [2024-10-13 17:26:01.236936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.653 [2024-10-13 17:26:01.265683] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.653 [2024-10-13 17:26:01.265804] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.653 [2024-10-13 17:26:01.265812] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.653 [2024-10-13 17:26:01.265820] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
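The nvmf/common.sh trace above builds a two-interface TCP test topology on one machine: it moves cvl_0_0 into a fresh network namespace, addresses both ends (10.0.0.2 in the namespace, 10.0.0.1 outside), brings the links up, and opens TCP port 4420 before ping-checking both directions. A minimal dry-run sketch of those steps follows; the interface names, addresses, and port come from the log, while the DRY_RUN/PLAN wrapper is my addition so the sequence can be previewed without root or the cvl_0_* NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced above (nvmf/common.sh @242-@266).
# DRY_RUN defaults to 1 so the script only prints its plan; clearing it would
# execute the real commands, which need root and the cvl_0_* interfaces.
set -euo pipefail
DRY_RUN=${DRY_RUN:-1}
PLAN=""

NS=cvl_0_0_ns_spdk   # namespace that will host the nvmf target
TGT_IF=cvl_0_0       # moved into the namespace, gets 10.0.0.2
HOST_IF=cvl_0_1      # stays in the default namespace, gets 10.0.0.1

run() { PLAN+="$*"$'\n'; [ "$DRY_RUN" = 1 ] || "$@"; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$HOST_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$HOST_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$HOST_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$HOST_IF" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
run ping -c 1 10.0.0.2                       # host -> target, mirrored from the log
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> host
printf '%s' "$PLAN"
```

Because the target process is later launched with `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD prefix above), only the namespaced interface ever carries the target's listener, which keeps the test traffic on a real TCP path rather than loopback.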
00:15:53.653 [2024-10-13 17:26:01.265840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.653 17:26:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:53.653 17:26:01 -- common/autotest_common.sh@852 -- # return 0 00:15:53.653 17:26:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:53.653 17:26:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:53.653 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:15:53.653 17:26:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.653 17:26:02 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.653 17:26:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.653 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:15:53.653 [2024-10-13 17:26:02.034189] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.653 17:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.653 17:26:02 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:53.653 17:26:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.653 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:15:53.653 17:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.653 17:26:02 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.653 17:26:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.653 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:15:53.653 [2024-10-13 17:26:02.058499] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.653 17:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.653 17:26:02 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:53.653 17:26:02 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.653 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:15:53.653 NULL1 00:15:53.653 17:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.653 17:26:02 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:53.653 17:26:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.653 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:15:53.653 17:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.653 17:26:02 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:53.653 17:26:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.653 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:15:53.653 17:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.653 17:26:02 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:53.653 [2024-10-13 17:26:02.126292] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
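Taken together, the rpc_cmd calls above assemble the target that the fused_ordering client then exercises: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a 1000 MB null bdev attached as its namespace (the "size: 1GB" reported below). A hedged sketch of the same sequence follows, using the same dry-run wrapper as above; rpc_cmd in the trace is autotest's wrapper around SPDK's RPC client, and the RPC_PY path here is an assumption about the checkout layout.

```shell
#!/usr/bin/env bash
# Sketch of the target-side RPC sequence traced above (fused_ordering.sh @15-@20).
# DRY_RUN defaults to 1: the script records the RPC invocations instead of
# sending them, since a live nvmf_tgt listening on /var/tmp/spdk.sock is needed.
set -euo pipefail
DRY_RUN=${DRY_RUN:-1}
PLAN=""
RPC_PY="./spdk/scripts/rpc.py"        # assumed path to SPDK's RPC client
NQN=nqn.2016-06.io.spdk:cnode1

rpc() { PLAN+="$RPC_PY $*"$'\n'; [ "$DRY_RUN" = 1 ] || "$RPC_PY" "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, flags as in the log
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, 512-byte blocks
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns "$NQN" NULL1
printf '%s' "$PLAN"
```

With DRY_RUN cleared and a running target, the fused_ordering binary would then be pointed at the same listener via the `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:...` string seen in the trace; each fused_ordering(N) line below is one iteration of that client against this subsystem.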
00:15:53.653 [2024-10-13 17:26:02.126341] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128146 ] 00:15:53.653 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.225 Attached to nqn.2016-06.io.spdk:cnode1 00:15:54.225 Namespace ID: 1 size: 1GB 00:15:54.225 fused_ordering(0) 00:15:54.225 fused_ordering(1) 00:15:54.225 fused_ordering(2) 00:15:54.225 fused_ordering(3) 00:15:54.225 fused_ordering(4) 00:15:54.225 fused_ordering(5) 00:15:54.225 fused_ordering(6) 00:15:54.225 fused_ordering(7) 00:15:54.225 fused_ordering(8) 00:15:54.225 fused_ordering(9) 00:15:54.225 fused_ordering(10) 00:15:54.225 fused_ordering(11) 00:15:54.225 fused_ordering(12) 00:15:54.225 fused_ordering(13) 00:15:54.225 fused_ordering(14) 00:15:54.225 fused_ordering(15) 00:15:54.225 fused_ordering(16) 00:15:54.225 fused_ordering(17) 00:15:54.225 fused_ordering(18) 00:15:54.225 fused_ordering(19) 00:15:54.225 fused_ordering(20) 00:15:54.225 fused_ordering(21) 00:15:54.225 fused_ordering(22) 00:15:54.225 fused_ordering(23) 00:15:54.225 fused_ordering(24) 00:15:54.225 fused_ordering(25) 00:15:54.225 fused_ordering(26) 00:15:54.225 fused_ordering(27) 00:15:54.225 fused_ordering(28) 00:15:54.225 fused_ordering(29) 00:15:54.225 fused_ordering(30) 00:15:54.225 fused_ordering(31) 00:15:54.225 fused_ordering(32) 00:15:54.225 fused_ordering(33) 00:15:54.225 fused_ordering(34) 00:15:54.225 fused_ordering(35) 00:15:54.225 fused_ordering(36) 00:15:54.225 fused_ordering(37) 00:15:54.225 fused_ordering(38) 00:15:54.225 fused_ordering(39) 00:15:54.225 fused_ordering(40) 00:15:54.225 fused_ordering(41) 00:15:54.225 fused_ordering(42) 00:15:54.225 fused_ordering(43) 00:15:54.225 fused_ordering(44) 00:15:54.225 fused_ordering(45) 00:15:54.225 fused_ordering(46) 00:15:54.225 fused_ordering(47) 00:15:54.225 fused_ordering(48) 
00:15:54.225 fused_ordering(49) 00:15:54.225 fused_ordering(50) 00:15:54.225 fused_ordering(51) 00:15:54.225 fused_ordering(52) 00:15:54.225 fused_ordering(53) 00:15:54.225 fused_ordering(54) 00:15:54.225 fused_ordering(55) 00:15:54.225 fused_ordering(56) 00:15:54.225 fused_ordering(57) 00:15:54.225 fused_ordering(58) 00:15:54.225 fused_ordering(59) 00:15:54.225 fused_ordering(60) 00:15:54.225 fused_ordering(61) 00:15:54.225 fused_ordering(62) 00:15:54.225 fused_ordering(63) 00:15:54.225 fused_ordering(64) 00:15:54.225 fused_ordering(65) 00:15:54.225 fused_ordering(66) 00:15:54.225 fused_ordering(67) 00:15:54.225 fused_ordering(68) 00:15:54.225 fused_ordering(69) 00:15:54.225 fused_ordering(70) 00:15:54.225 fused_ordering(71) 00:15:54.225 fused_ordering(72) 00:15:54.225 fused_ordering(73) 00:15:54.225 fused_ordering(74) 00:15:54.225 fused_ordering(75) 00:15:54.225 fused_ordering(76) 00:15:54.225 fused_ordering(77) 00:15:54.225 fused_ordering(78) 00:15:54.225 fused_ordering(79) 00:15:54.225 fused_ordering(80) 00:15:54.225 fused_ordering(81) 00:15:54.225 fused_ordering(82) 00:15:54.225 fused_ordering(83) 00:15:54.225 fused_ordering(84) 00:15:54.225 fused_ordering(85) 00:15:54.225 fused_ordering(86) 00:15:54.225 fused_ordering(87) 00:15:54.225 fused_ordering(88) 00:15:54.225 fused_ordering(89) 00:15:54.225 fused_ordering(90) 00:15:54.225 fused_ordering(91) 00:15:54.225 fused_ordering(92) 00:15:54.225 fused_ordering(93) 00:15:54.225 fused_ordering(94) 00:15:54.225 fused_ordering(95) 00:15:54.225 fused_ordering(96) 00:15:54.225 fused_ordering(97) 00:15:54.225 fused_ordering(98) 00:15:54.225 fused_ordering(99) 00:15:54.225 fused_ordering(100) 00:15:54.225 fused_ordering(101) 00:15:54.225 fused_ordering(102) 00:15:54.225 fused_ordering(103) 00:15:54.225 fused_ordering(104) 00:15:54.225 fused_ordering(105) 00:15:54.225 fused_ordering(106) 00:15:54.225 fused_ordering(107) 00:15:54.225 fused_ordering(108) 00:15:54.225 fused_ordering(109) 00:15:54.225 fused_ordering(110) 
00:15:54.225 fused_ordering(111) 00:15:54.225 fused_ordering(112) 00:15:54.225 fused_ordering(113) 00:15:54.225 fused_ordering(114) 00:15:54.225 fused_ordering(115) 00:15:54.225 fused_ordering(116) 00:15:54.225 fused_ordering(117) 00:15:54.225 fused_ordering(118) 00:15:54.225 fused_ordering(119) 00:15:54.225 fused_ordering(120) 00:15:54.225 fused_ordering(121) 00:15:54.225 fused_ordering(122) 00:15:54.225 fused_ordering(123) 00:15:54.225 fused_ordering(124) 00:15:54.225 fused_ordering(125) 00:15:54.225 fused_ordering(126) 00:15:54.225 fused_ordering(127) 00:15:54.225 fused_ordering(128) 00:15:54.225 fused_ordering(129) 00:15:54.225 fused_ordering(130) 00:15:54.225 fused_ordering(131) 00:15:54.225 fused_ordering(132) 00:15:54.225 fused_ordering(133) 00:15:54.225 fused_ordering(134) 00:15:54.225 fused_ordering(135) 00:15:54.225 fused_ordering(136) 00:15:54.225 fused_ordering(137) 00:15:54.225 fused_ordering(138) 00:15:54.225 fused_ordering(139) 00:15:54.225 fused_ordering(140) 00:15:54.225 fused_ordering(141) 00:15:54.225 fused_ordering(142) 00:15:54.225 fused_ordering(143) 00:15:54.225 fused_ordering(144) 00:15:54.225 fused_ordering(145) 00:15:54.225 fused_ordering(146) 00:15:54.225 fused_ordering(147) 00:15:54.225 fused_ordering(148) 00:15:54.225 fused_ordering(149) 00:15:54.225 fused_ordering(150) 00:15:54.225 fused_ordering(151) 00:15:54.225 fused_ordering(152) 00:15:54.225 fused_ordering(153) 00:15:54.225 fused_ordering(154) 00:15:54.225 fused_ordering(155) 00:15:54.225 fused_ordering(156) 00:15:54.225 fused_ordering(157) 00:15:54.225 fused_ordering(158) 00:15:54.225 fused_ordering(159) 00:15:54.225 fused_ordering(160) 00:15:54.225 fused_ordering(161) 00:15:54.225 fused_ordering(162) 00:15:54.225 fused_ordering(163) 00:15:54.225 fused_ordering(164) 00:15:54.225 fused_ordering(165) 00:15:54.225 fused_ordering(166) 00:15:54.225 fused_ordering(167) 00:15:54.225 fused_ordering(168) 00:15:54.225 fused_ordering(169) 00:15:54.225 fused_ordering(170) 00:15:54.225 
fused_ordering(171) 00:15:54.225 fused_ordering(172) 00:15:54.225 fused_ordering(173) 00:15:54.225 fused_ordering(174) 00:15:54.225 fused_ordering(175) 00:15:54.225 fused_ordering(176) 00:15:54.225 fused_ordering(177) 00:15:54.225 fused_ordering(178) 00:15:54.225 fused_ordering(179) 00:15:54.225 fused_ordering(180) 00:15:54.225 fused_ordering(181) 00:15:54.225 fused_ordering(182) 00:15:54.225 fused_ordering(183) 00:15:54.225 fused_ordering(184) 00:15:54.225 fused_ordering(185) 00:15:54.225 fused_ordering(186) 00:15:54.225 fused_ordering(187) 00:15:54.225 fused_ordering(188) 00:15:54.225 fused_ordering(189) 00:15:54.225 fused_ordering(190) 00:15:54.225 fused_ordering(191) 00:15:54.225 fused_ordering(192) 00:15:54.225 fused_ordering(193) 00:15:54.225 fused_ordering(194) 00:15:54.225 fused_ordering(195) 00:15:54.225 fused_ordering(196) 00:15:54.225 fused_ordering(197) 00:15:54.225 fused_ordering(198) 00:15:54.225 fused_ordering(199) 00:15:54.225 fused_ordering(200) 00:15:54.225 fused_ordering(201) 00:15:54.225 fused_ordering(202) 00:15:54.225 fused_ordering(203) 00:15:54.225 fused_ordering(204) 00:15:54.225 fused_ordering(205) 00:15:54.486 fused_ordering(206) 00:15:54.486 fused_ordering(207) 00:15:54.486 fused_ordering(208) 00:15:54.486 fused_ordering(209) 00:15:54.486 fused_ordering(210) 00:15:54.486 fused_ordering(211) 00:15:54.486 fused_ordering(212) 00:15:54.486 fused_ordering(213) 00:15:54.486 fused_ordering(214) 00:15:54.486 fused_ordering(215) 00:15:54.486 fused_ordering(216) 00:15:54.486 fused_ordering(217) 00:15:54.486 fused_ordering(218) 00:15:54.486 fused_ordering(219) 00:15:54.486 fused_ordering(220) 00:15:54.486 fused_ordering(221) 00:15:54.486 fused_ordering(222) 00:15:54.486 fused_ordering(223) 00:15:54.486 fused_ordering(224) 00:15:54.486 fused_ordering(225) 00:15:54.486 fused_ordering(226) 00:15:54.486 fused_ordering(227) 00:15:54.486 fused_ordering(228) 00:15:54.486 fused_ordering(229) 00:15:54.486 fused_ordering(230) 00:15:54.486 fused_ordering(231) 
00:15:54.486 fused_ordering(232) 00:15:54.486 fused_ordering(233) 00:15:54.486 fused_ordering(234) 00:15:54.486 fused_ordering(235) 00:15:54.486 fused_ordering(236) 00:15:54.486 fused_ordering(237) 00:15:54.486 fused_ordering(238) 00:15:54.486 fused_ordering(239) 00:15:54.486 fused_ordering(240) 00:15:54.486 fused_ordering(241) 00:15:54.486 fused_ordering(242) 00:15:54.486 fused_ordering(243) 00:15:54.486 fused_ordering(244) 00:15:54.486 fused_ordering(245) 00:15:54.486 fused_ordering(246) 00:15:54.486 fused_ordering(247) 00:15:54.486 fused_ordering(248) 00:15:54.486 fused_ordering(249) 00:15:54.486 fused_ordering(250) 00:15:54.486 fused_ordering(251) 00:15:54.486 fused_ordering(252) 00:15:54.486 fused_ordering(253) 00:15:54.486 fused_ordering(254) 00:15:54.486 fused_ordering(255) 00:15:54.486 fused_ordering(256) 00:15:54.486 fused_ordering(257) 00:15:54.486 fused_ordering(258) 00:15:54.486 fused_ordering(259) 00:15:54.486 fused_ordering(260) 00:15:54.486 fused_ordering(261) 00:15:54.486 fused_ordering(262) 00:15:54.486 fused_ordering(263) 00:15:54.486 fused_ordering(264) 00:15:54.486 fused_ordering(265) 00:15:54.486 fused_ordering(266) 00:15:54.486 fused_ordering(267) 00:15:54.486 fused_ordering(268) 00:15:54.486 fused_ordering(269) 00:15:54.486 fused_ordering(270) 00:15:54.486 fused_ordering(271) 00:15:54.486 fused_ordering(272) 00:15:54.486 fused_ordering(273) 00:15:54.486 fused_ordering(274) 00:15:54.486 fused_ordering(275) 00:15:54.486 fused_ordering(276) 00:15:54.486 fused_ordering(277) 00:15:54.486 fused_ordering(278) 00:15:54.486 fused_ordering(279) 00:15:54.486 fused_ordering(280) 00:15:54.486 fused_ordering(281) 00:15:54.486 fused_ordering(282) 00:15:54.486 fused_ordering(283) 00:15:54.486 fused_ordering(284) 00:15:54.486 fused_ordering(285) 00:15:54.486 fused_ordering(286) 00:15:54.486 fused_ordering(287) 00:15:54.486 fused_ordering(288) 00:15:54.486 fused_ordering(289) 00:15:54.486 fused_ordering(290) 00:15:54.486 fused_ordering(291) 00:15:54.486 
fused_ordering(292) 00:15:54.486 fused_ordering(293) 00:15:54.486 fused_ordering(294) 00:15:54.486 fused_ordering(295) 00:15:54.486 fused_ordering(296) 00:15:54.486 fused_ordering(297) 00:15:54.486 fused_ordering(298) 00:15:54.486 fused_ordering(299) 00:15:54.486 fused_ordering(300) 00:15:54.486 fused_ordering(301) 00:15:54.486 fused_ordering(302) 00:15:54.486 fused_ordering(303) 00:15:54.486 fused_ordering(304) 00:15:54.486 fused_ordering(305) 00:15:54.486 fused_ordering(306) 00:15:54.486 fused_ordering(307) 00:15:54.486 fused_ordering(308) 00:15:54.486 fused_ordering(309) 00:15:54.486 fused_ordering(310) 00:15:54.486 fused_ordering(311) 00:15:54.486 fused_ordering(312) 00:15:54.486 fused_ordering(313) 00:15:54.486 fused_ordering(314) 00:15:54.486 fused_ordering(315) 00:15:54.486 fused_ordering(316) 00:15:54.486 fused_ordering(317) 00:15:54.486 fused_ordering(318) 00:15:54.486 fused_ordering(319) 00:15:54.486 fused_ordering(320) 00:15:54.486 fused_ordering(321) 00:15:54.486 fused_ordering(322) 00:15:54.486 fused_ordering(323) 00:15:54.486 fused_ordering(324) 00:15:54.486 fused_ordering(325) 00:15:54.486 fused_ordering(326) 00:15:54.486 fused_ordering(327) 00:15:54.486 fused_ordering(328) 00:15:54.486 fused_ordering(329) 00:15:54.486 fused_ordering(330) 00:15:54.486 fused_ordering(331) 00:15:54.486 fused_ordering(332) 00:15:54.486 fused_ordering(333) 00:15:54.486 fused_ordering(334) 00:15:54.486 fused_ordering(335) 00:15:54.486 fused_ordering(336) 00:15:54.486 fused_ordering(337) 00:15:54.486 fused_ordering(338) 00:15:54.486 fused_ordering(339) 00:15:54.486 fused_ordering(340) 00:15:54.486 fused_ordering(341) 00:15:54.486 fused_ordering(342) 00:15:54.486 fused_ordering(343) 00:15:54.486 fused_ordering(344) 00:15:54.486 fused_ordering(345) 00:15:54.486 fused_ordering(346) 00:15:54.486 fused_ordering(347) 00:15:54.486 fused_ordering(348) 00:15:54.486 fused_ordering(349) 00:15:54.486 fused_ordering(350) 00:15:54.486 fused_ordering(351) 00:15:54.486 fused_ordering(352) 
00:15:54.486 fused_ordering(353) 00:15:54.486 fused_ordering(354) 00:15:54.486 fused_ordering(355) 00:15:54.486 fused_ordering(356) 00:15:54.486 fused_ordering(357) 00:15:54.486 fused_ordering(358) 00:15:54.486 fused_ordering(359) 00:15:54.486 fused_ordering(360) 00:15:54.486 fused_ordering(361) 00:15:54.486 fused_ordering(362) 00:15:54.486 fused_ordering(363) 00:15:54.486 fused_ordering(364) 00:15:54.486 fused_ordering(365) 00:15:54.486 fused_ordering(366) 00:15:54.486 fused_ordering(367) 00:15:54.486 fused_ordering(368) 00:15:54.486 fused_ordering(369) 00:15:54.486 fused_ordering(370) 00:15:54.486 fused_ordering(371) 00:15:54.486 fused_ordering(372) 00:15:54.486 fused_ordering(373) 00:15:54.486 fused_ordering(374) 00:15:54.486 fused_ordering(375) 00:15:54.486 fused_ordering(376) 00:15:54.486 fused_ordering(377) 00:15:54.486 fused_ordering(378) 00:15:54.486 fused_ordering(379) 00:15:54.486 fused_ordering(380) 00:15:54.486 fused_ordering(381) 00:15:54.486 fused_ordering(382) 00:15:54.486 fused_ordering(383) 00:15:54.486 fused_ordering(384) 00:15:54.486 fused_ordering(385) 00:15:54.486 fused_ordering(386) 00:15:54.486 fused_ordering(387) 00:15:54.486 fused_ordering(388) 00:15:54.486 fused_ordering(389) 00:15:54.486 fused_ordering(390) 00:15:54.486 fused_ordering(391) 00:15:54.486 fused_ordering(392) 00:15:54.486 fused_ordering(393) 00:15:54.486 fused_ordering(394) 00:15:54.486 fused_ordering(395) 00:15:54.486 fused_ordering(396) 00:15:54.486 fused_ordering(397) 00:15:54.486 fused_ordering(398) 00:15:54.486 fused_ordering(399) 00:15:54.486 fused_ordering(400) 00:15:54.486 fused_ordering(401) 00:15:54.486 fused_ordering(402) 00:15:54.486 fused_ordering(403) 00:15:54.486 fused_ordering(404) 00:15:54.486 fused_ordering(405) 00:15:54.486 fused_ordering(406) 00:15:54.486 fused_ordering(407) 00:15:54.486 fused_ordering(408) 00:15:54.486 fused_ordering(409) 00:15:54.486 fused_ordering(410) 00:15:54.747 fused_ordering(411) 00:15:54.747 fused_ordering(412) 00:15:54.747 
fused_ordering(413) 00:15:54.747 fused_ordering(414) 00:15:54.747 fused_ordering(415) 00:15:54.747 fused_ordering(416) 00:15:54.747 fused_ordering(417) 00:15:54.747 fused_ordering(418) 00:15:54.747 fused_ordering(419) 00:15:54.747 fused_ordering(420) 00:15:54.747 fused_ordering(421) 00:15:54.747 fused_ordering(422) 00:15:54.747 fused_ordering(423) 00:15:54.747 fused_ordering(424) 00:15:54.747 fused_ordering(425) 00:15:54.747 fused_ordering(426) 00:15:54.747 fused_ordering(427) 00:15:54.747 fused_ordering(428) 00:15:54.747 fused_ordering(429) 00:15:54.747 fused_ordering(430) 00:15:54.747 fused_ordering(431) 00:15:54.747 fused_ordering(432) 00:15:54.747 fused_ordering(433) 00:15:54.747 fused_ordering(434) 00:15:54.747 fused_ordering(435) 00:15:54.747 fused_ordering(436) 00:15:54.747 fused_ordering(437) 00:15:54.747 fused_ordering(438) 00:15:54.747 fused_ordering(439) 00:15:54.747 fused_ordering(440) 00:15:54.747 fused_ordering(441) 00:15:54.747 fused_ordering(442) 00:15:54.747 fused_ordering(443) 00:15:54.747 fused_ordering(444) 00:15:54.747 fused_ordering(445) 00:15:54.747 fused_ordering(446) 00:15:54.747 fused_ordering(447) 00:15:54.747 fused_ordering(448) 00:15:54.747 fused_ordering(449) 00:15:54.747 fused_ordering(450) 00:15:54.747 fused_ordering(451) 00:15:54.747 fused_ordering(452) 00:15:54.747 fused_ordering(453) 00:15:54.747 fused_ordering(454) 00:15:54.747 fused_ordering(455) 00:15:54.747 fused_ordering(456) 00:15:54.747 fused_ordering(457) 00:15:54.747 fused_ordering(458) 00:15:54.747 fused_ordering(459) 00:15:54.747 fused_ordering(460) 00:15:54.747 fused_ordering(461) 00:15:54.747 fused_ordering(462) 00:15:54.747 fused_ordering(463) 00:15:54.747 fused_ordering(464) 00:15:54.747 fused_ordering(465) 00:15:54.747 fused_ordering(466) 00:15:54.747 fused_ordering(467) 00:15:54.747 fused_ordering(468) 00:15:54.747 fused_ordering(469) 00:15:54.747 fused_ordering(470) 00:15:54.747 fused_ordering(471) 00:15:54.747 fused_ordering(472) 00:15:54.747 fused_ordering(473) 
00:15:54.747 fused_ordering(474) 00:15:54.747 fused_ordering(475) 00:15:54.747 fused_ordering(476) 00:15:54.747 fused_ordering(477) 00:15:54.747 fused_ordering(478) 00:15:54.747 fused_ordering(479) 00:15:54.747 fused_ordering(480) 00:15:54.747 fused_ordering(481) 00:15:54.747 fused_ordering(482) 00:15:54.747 fused_ordering(483) 00:15:54.747 fused_ordering(484) 00:15:54.747 fused_ordering(485) 00:15:54.747 fused_ordering(486) 00:15:54.747 fused_ordering(487) 00:15:54.747 fused_ordering(488) 00:15:54.747 fused_ordering(489) 00:15:54.747 fused_ordering(490) 00:15:54.747 fused_ordering(491) 00:15:54.747 fused_ordering(492) 00:15:54.747 fused_ordering(493) 00:15:54.747 fused_ordering(494) 00:15:54.747 fused_ordering(495) 00:15:54.747 fused_ordering(496) 00:15:54.747 fused_ordering(497) 00:15:54.747 fused_ordering(498) 00:15:54.747 fused_ordering(499) 00:15:54.747 fused_ordering(500) 00:15:54.747 fused_ordering(501) 00:15:54.747 fused_ordering(502) 00:15:54.747 fused_ordering(503) 00:15:54.747 fused_ordering(504) 00:15:54.747 fused_ordering(505) 00:15:54.747 fused_ordering(506) 00:15:54.747 fused_ordering(507) 00:15:54.747 fused_ordering(508) 00:15:54.747 fused_ordering(509) 00:15:54.747 fused_ordering(510) 00:15:54.747 fused_ordering(511) 00:15:54.747 fused_ordering(512) 00:15:54.747 fused_ordering(513) 00:15:54.747 fused_ordering(514) 00:15:54.747 fused_ordering(515) 00:15:54.747 fused_ordering(516) 00:15:54.747 fused_ordering(517) 00:15:54.747 fused_ordering(518) 00:15:54.747 fused_ordering(519) 00:15:54.747 fused_ordering(520) 00:15:54.747 fused_ordering(521) 00:15:54.747 fused_ordering(522) 00:15:54.747 fused_ordering(523) 00:15:54.747 fused_ordering(524) 00:15:54.747 fused_ordering(525) 00:15:54.747 fused_ordering(526) 00:15:54.747 fused_ordering(527) 00:15:54.747 fused_ordering(528) 00:15:54.747 fused_ordering(529) 00:15:54.747 fused_ordering(530) 00:15:54.747 fused_ordering(531) 00:15:54.747 fused_ordering(532) 00:15:54.747 fused_ordering(533) 00:15:54.747 
fused_ordering(534) 00:15:54.747 fused_ordering(535) 00:15:54.747 fused_ordering(536) 00:15:54.747 fused_ordering(537) 00:15:54.747 fused_ordering(538) 00:15:54.747 fused_ordering(539) 00:15:54.748 fused_ordering(540) 00:15:54.748 fused_ordering(541) 00:15:54.748 fused_ordering(542) 00:15:54.748 fused_ordering(543) 00:15:54.748 fused_ordering(544) 00:15:54.748 fused_ordering(545) 00:15:54.748 fused_ordering(546) 00:15:54.748 fused_ordering(547) 00:15:54.748 fused_ordering(548) 00:15:54.748 fused_ordering(549) 00:15:54.748 fused_ordering(550) 00:15:54.748 fused_ordering(551) 00:15:54.748 fused_ordering(552) 00:15:54.748 fused_ordering(553) 00:15:54.748 fused_ordering(554) 00:15:54.748 fused_ordering(555) 00:15:54.748 fused_ordering(556) 00:15:54.748 fused_ordering(557) 00:15:54.748 fused_ordering(558) 00:15:54.748 fused_ordering(559) 00:15:54.748 fused_ordering(560) 00:15:54.748 fused_ordering(561) 00:15:54.748 fused_ordering(562) 00:15:54.748 fused_ordering(563) 00:15:54.748 fused_ordering(564) 00:15:54.748 fused_ordering(565) 00:15:54.748 fused_ordering(566) 00:15:54.748 fused_ordering(567) 00:15:54.748 fused_ordering(568) 00:15:54.748 fused_ordering(569) 00:15:54.748 fused_ordering(570) 00:15:54.748 fused_ordering(571) 00:15:54.748 fused_ordering(572) 00:15:54.748 fused_ordering(573) 00:15:54.748 fused_ordering(574) 00:15:54.748 fused_ordering(575) 00:15:54.748 fused_ordering(576) 00:15:54.748 fused_ordering(577) 00:15:54.748 fused_ordering(578) 00:15:54.748 fused_ordering(579) 00:15:54.748 fused_ordering(580) 00:15:54.748 fused_ordering(581) 00:15:54.748 fused_ordering(582) 00:15:54.748 fused_ordering(583) 00:15:54.748 fused_ordering(584) 00:15:54.748 fused_ordering(585) 00:15:54.748 fused_ordering(586) 00:15:54.748 fused_ordering(587) 00:15:54.748 fused_ordering(588) 00:15:54.748 fused_ordering(589) 00:15:54.748 fused_ordering(590) 00:15:54.748 fused_ordering(591) 00:15:54.748 fused_ordering(592) 00:15:54.748 fused_ordering(593) 00:15:54.748 fused_ordering(594) 
00:15:54.748 fused_ordering(595) 00:15:54.748 fused_ordering(596) 00:15:54.748 fused_ordering(597) 00:15:54.748 fused_ordering(598) 00:15:54.748 fused_ordering(599) 00:15:54.748 fused_ordering(600) 00:15:54.748 fused_ordering(601) 00:15:54.748 fused_ordering(602) 00:15:54.748 fused_ordering(603) 00:15:54.748 fused_ordering(604) 00:15:54.748 fused_ordering(605) 00:15:54.748 fused_ordering(606) 00:15:54.748 fused_ordering(607) 00:15:54.748 fused_ordering(608) 00:15:54.748 fused_ordering(609) 00:15:54.748 fused_ordering(610) 00:15:54.748 fused_ordering(611) 00:15:54.748 fused_ordering(612) 00:15:54.748 fused_ordering(613) 00:15:54.748 fused_ordering(614) 00:15:54.748 fused_ordering(615) 00:15:55.319 fused_ordering(616) 00:15:55.319 fused_ordering(617) 00:15:55.319 fused_ordering(618) 00:15:55.319 fused_ordering(619) 00:15:55.319 fused_ordering(620) 00:15:55.319 fused_ordering(621) 00:15:55.319 fused_ordering(622) 00:15:55.319 fused_ordering(623) 00:15:55.319 fused_ordering(624) 00:15:55.319 fused_ordering(625) 00:15:55.319 fused_ordering(626) 00:15:55.319 fused_ordering(627) 00:15:55.319 fused_ordering(628) 00:15:55.319 fused_ordering(629) 00:15:55.319 fused_ordering(630) 00:15:55.319 fused_ordering(631) 00:15:55.319 fused_ordering(632) 00:15:55.319 fused_ordering(633) 00:15:55.319 fused_ordering(634) 00:15:55.319 fused_ordering(635) 00:15:55.319 fused_ordering(636) 00:15:55.319 fused_ordering(637) 00:15:55.319 fused_ordering(638) 00:15:55.319 fused_ordering(639) 00:15:55.319 fused_ordering(640) 00:15:55.319 fused_ordering(641) 00:15:55.319 fused_ordering(642) 00:15:55.319 fused_ordering(643) 00:15:55.319 fused_ordering(644) 00:15:55.319 fused_ordering(645) 00:15:55.319 fused_ordering(646) 00:15:55.319 fused_ordering(647) 00:15:55.319 fused_ordering(648) 00:15:55.319 fused_ordering(649) 00:15:55.319 fused_ordering(650) 00:15:55.319 fused_ordering(651) 00:15:55.319 fused_ordering(652) 00:15:55.319 fused_ordering(653) 00:15:55.319 fused_ordering(654) 00:15:55.319 
fused_ordering(655) 00:15:55.319 fused_ordering(656) 00:15:55.319 fused_ordering(657) 00:15:55.319 fused_ordering(658) 00:15:55.319 fused_ordering(659) 00:15:55.319 fused_ordering(660) 00:15:55.319 fused_ordering(661) 00:15:55.319 fused_ordering(662) 00:15:55.319 fused_ordering(663) 00:15:55.319 fused_ordering(664) 00:15:55.319 fused_ordering(665) 00:15:55.319 fused_ordering(666) 00:15:55.319 fused_ordering(667) 00:15:55.319 fused_ordering(668) 00:15:55.319 fused_ordering(669) 00:15:55.319 fused_ordering(670) 00:15:55.319 fused_ordering(671) 00:15:55.319 fused_ordering(672) 00:15:55.319 fused_ordering(673) 00:15:55.319 fused_ordering(674) 00:15:55.319 fused_ordering(675) 00:15:55.319 fused_ordering(676) 00:15:55.319 fused_ordering(677) 00:15:55.319 fused_ordering(678) 00:15:55.319 fused_ordering(679) 00:15:55.319 fused_ordering(680) 00:15:55.319 fused_ordering(681) 00:15:55.319 fused_ordering(682) 00:15:55.319 fused_ordering(683) 00:15:55.319 fused_ordering(684) 00:15:55.319 fused_ordering(685) 00:15:55.319 fused_ordering(686) 00:15:55.319 fused_ordering(687) 00:15:55.319 fused_ordering(688) 00:15:55.319 fused_ordering(689) 00:15:55.319 fused_ordering(690) 00:15:55.319 fused_ordering(691) 00:15:55.319 fused_ordering(692) 00:15:55.319 fused_ordering(693) 00:15:55.319 fused_ordering(694) 00:15:55.319 fused_ordering(695) 00:15:55.319 fused_ordering(696) 00:15:55.319 fused_ordering(697) 00:15:55.319 fused_ordering(698) 00:15:55.319 fused_ordering(699) 00:15:55.319 fused_ordering(700) 00:15:55.319 fused_ordering(701) 00:15:55.319 fused_ordering(702) 00:15:55.319 fused_ordering(703) 00:15:55.319 fused_ordering(704) 00:15:55.319 fused_ordering(705) 00:15:55.319 fused_ordering(706) 00:15:55.319 fused_ordering(707) 00:15:55.319 fused_ordering(708) 00:15:55.319 fused_ordering(709) 00:15:55.319 fused_ordering(710) 00:15:55.319 fused_ordering(711) 00:15:55.319 fused_ordering(712) 00:15:55.319 fused_ordering(713) 00:15:55.319 fused_ordering(714) 00:15:55.319 fused_ordering(715) 
00:15:55.319 fused_ordering(716) 00:15:55.319 fused_ordering(717) 00:15:55.319 fused_ordering(718) 00:15:55.319 fused_ordering(719) 00:15:55.319 fused_ordering(720) 00:15:55.319 fused_ordering(721) 00:15:55.319 fused_ordering(722) 00:15:55.319 fused_ordering(723) 00:15:55.319 fused_ordering(724) 00:15:55.319 fused_ordering(725) 00:15:55.319 fused_ordering(726) 00:15:55.319 fused_ordering(727) 00:15:55.319 fused_ordering(728) 00:15:55.319 fused_ordering(729) 00:15:55.319 fused_ordering(730) 00:15:55.319 fused_ordering(731) 00:15:55.319 fused_ordering(732) 00:15:55.319 fused_ordering(733) 00:15:55.319 fused_ordering(734) 00:15:55.319 fused_ordering(735) 00:15:55.319 fused_ordering(736) 00:15:55.319 fused_ordering(737) 00:15:55.319 fused_ordering(738) 00:15:55.319 fused_ordering(739) 00:15:55.319 fused_ordering(740) 00:15:55.319 fused_ordering(741) 00:15:55.319 fused_ordering(742) 00:15:55.319 fused_ordering(743) 00:15:55.319 fused_ordering(744) 00:15:55.319 fused_ordering(745) 00:15:55.319 fused_ordering(746) 00:15:55.319 fused_ordering(747) 00:15:55.319 fused_ordering(748) 00:15:55.319 fused_ordering(749) 00:15:55.319 fused_ordering(750) 00:15:55.319 fused_ordering(751) 00:15:55.319 fused_ordering(752) 00:15:55.319 fused_ordering(753) 00:15:55.319 fused_ordering(754) 00:15:55.319 fused_ordering(755) 00:15:55.319 fused_ordering(756) 00:15:55.319 fused_ordering(757) 00:15:55.319 fused_ordering(758) 00:15:55.319 fused_ordering(759) 00:15:55.320 fused_ordering(760) 00:15:55.320 fused_ordering(761) 00:15:55.320 fused_ordering(762) 00:15:55.320 fused_ordering(763) 00:15:55.320 fused_ordering(764) 00:15:55.320 fused_ordering(765) 00:15:55.320 fused_ordering(766) 00:15:55.320 fused_ordering(767) 00:15:55.320 fused_ordering(768) 00:15:55.320 fused_ordering(769) 00:15:55.320 fused_ordering(770) 00:15:55.320 fused_ordering(771) 00:15:55.320 fused_ordering(772) 00:15:55.320 fused_ordering(773) 00:15:55.320 fused_ordering(774) 00:15:55.320 fused_ordering(775) 00:15:55.320 
fused_ordering(776) 00:15:55.320 […fused_ordering(777) through fused_ordering(1022) condensed: one identical per-iteration counter line per ordering, timestamps advancing 00:15:55.320 → 00:15:55.891 → 00:15:55.892…] fused_ordering(1023) 00:15:55.892 17:26:04 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:55.892 17:26:04 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:55.892 17:26:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:55.892 17:26:04 -- nvmf/common.sh@116 -- # sync 00:15:55.892 17:26:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:55.892 17:26:04 -- nvmf/common.sh@119 -- # set +e 00:15:55.892 17:26:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:55.892 17:26:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:55.892 rmmod nvme_tcp 00:15:55.892 rmmod nvme_fabrics 00:15:55.892 rmmod nvme_keyring 00:15:55.892 17:26:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:55.892 17:26:04 -- nvmf/common.sh@123 -- # set -e 00:15:55.892 17:26:04 -- nvmf/common.sh@124 -- # return 0 00:15:55.892 17:26:04 -- nvmf/common.sh@477 -- # '[' -n 3127818 ']' 00:15:55.892 17:26:04 -- nvmf/common.sh@478 -- # killprocess 3127818 00:15:55.892 17:26:04 -- common/autotest_common.sh@926 -- # '[' -z 3127818 ']' 00:15:55.892 17:26:04 -- common/autotest_common.sh@930 -- # kill -0 3127818 00:15:55.892 17:26:04 -- common/autotest_common.sh@931 -- # uname 00:15:55.892 17:26:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:55.892 17:26:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3127818 00:15:56.152 17:26:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:56.152 17:26:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:56.152 17:26:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3127818' 00:15:56.152 killing process with pid 3127818 00:15:56.152 17:26:04 -- common/autotest_common.sh@945 -- # kill 3127818 00:15:56.152 17:26:04 -- common/autotest_common.sh@950 --
# wait 3127818 00:15:56.152 17:26:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:56.152 17:26:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:56.152 17:26:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:56.152 17:26:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.152 17:26:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:56.152 17:26:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.152 17:26:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.152 17:26:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.696 17:26:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:58.696 00:15:58.696 real 0m13.017s 00:15:58.696 user 0m6.963s 00:15:58.696 sys 0m6.683s 00:15:58.696 17:26:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.696 17:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 ************************************ 00:15:58.696 END TEST nvmf_fused_ordering 00:15:58.696 ************************************ 00:15:58.696 17:26:06 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:58.696 17:26:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:58.696 17:26:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:58.696 17:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 ************************************ 00:15:58.696 START TEST nvmf_delete_subsystem 00:15:58.696 ************************************ 00:15:58.696 17:26:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:58.696 * Looking for test storage... 
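The killprocess helper traced above (nvmf/common.sh@478 through autotest_common.sh@950) boils down to a liveness check with `kill -0`, a name check via `ps`, then `kill` and `wait`. A minimal stand-alone sketch of that flow, assuming plain bash; the `sleep` started below is a stand-in for the nvmf_tgt reactor process, not part of the real helper:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow seen in the trace: kill -0 confirms the
# pid is alive, ps fetches the process name, then kill and wait reap it.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0          # already gone: nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")         # same ps form as the trace
  echo "killing process with pid $pid ($name)"
  kill "$pid"
  wait "$pid" 2>/dev/null                         # reap so no zombie is left
}

sleep 60 &                                        # stand-in for the target pid
killprocess $!
echo "cleanup done"
```

The real helper also refuses to kill processes named `sudo` (the `'[' reactor_1 = sudo ']'` check in the trace); that guard is omitted here for brevity.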
00:15:58.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.696 17:26:06 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.696 17:26:06 -- nvmf/common.sh@7 -- # uname -s 00:15:58.696 17:26:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.696 17:26:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.696 17:26:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.696 17:26:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.696 17:26:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.696 17:26:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.696 17:26:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.696 17:26:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.696 17:26:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.696 17:26:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.696 17:26:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:58.696 17:26:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:58.696 17:26:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.696 17:26:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.696 17:26:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.696 17:26:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.696 17:26:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.696 17:26:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.696 17:26:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.696 17:26:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[…duplicate golangci/protoc/go entries condensed…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.696 17:26:06 -- paths/export.sh@3 -- # PATH=[same entries, /opt/go first; condensed] 00:15:58.696 17:26:06 -- paths/export.sh@4 -- # PATH=[same entries, /opt/protoc first; condensed] 00:15:58.696 17:26:06 -- paths/export.sh@5 -- # export PATH 00:15:58.697 17:26:06 -- paths/export.sh@6 -- # echo
[exported PATH value echoed back; condensed] 00:15:58.697 17:26:06 -- nvmf/common.sh@46 -- # : 0 00:15:58.697 17:26:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:58.697 17:26:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:58.697 17:26:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:58.697 17:26:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.697 17:26:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.697 17:26:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:58.697 17:26:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:58.697 17:26:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:58.697 17:26:06 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:58.697 17:26:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:58.697 17:26:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.697 17:26:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:58.697 17:26:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:58.697 17:26:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:58.697 17:26:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.697 17:26:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.697 17:26:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.697 17:26:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:58.697 17:26:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:58.697 17:26:06
-- nvmf/common.sh@284 -- # xtrace_disable 00:15:58.697 17:26:06 -- common/autotest_common.sh@10 -- # set +x 00:16:06.844 17:26:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:06.844 17:26:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:06.844 17:26:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:06.844 17:26:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:06.844 17:26:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:06.844 17:26:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:06.844 17:26:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:06.844 17:26:13 -- nvmf/common.sh@294 -- # net_devs=() 00:16:06.844 17:26:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:06.844 17:26:13 -- nvmf/common.sh@295 -- # e810=() 00:16:06.844 17:26:13 -- nvmf/common.sh@295 -- # local -ga e810 00:16:06.844 17:26:13 -- nvmf/common.sh@296 -- # x722=() 00:16:06.844 17:26:13 -- nvmf/common.sh@296 -- # local -ga x722 00:16:06.844 17:26:13 -- nvmf/common.sh@297 -- # mlx=() 00:16:06.844 17:26:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:06.844 17:26:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.844 17:26:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:06.844 17:26:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:06.844 17:26:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:06.844 17:26:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:06.844 17:26:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:06.844 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:06.844 17:26:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:06.844 17:26:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:06.844 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:06.844 17:26:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:06.844 17:26:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:06.844 17:26:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:16:06.844 17:26:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.844 17:26:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:06.844 17:26:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.844 17:26:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:06.844 Found net devices under 0000:31:00.0: cvl_0_0 00:16:06.844 17:26:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.844 17:26:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:06.844 17:26:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.844 17:26:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:06.844 17:26:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.844 17:26:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:06.844 Found net devices under 0000:31:00.1: cvl_0_1 00:16:06.845 17:26:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.845 17:26:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:06.845 17:26:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:06.845 17:26:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:06.845 17:26:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:06.845 17:26:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:06.845 17:26:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.845 17:26:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.845 17:26:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.845 17:26:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:06.845 17:26:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.845 17:26:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.845 17:26:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:06.845 17:26:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
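nvmf_tcp_init, entered in the trace just above, splits the two-port NIC across a network namespace so target and initiator traffic cross a real link: cvl_0_0 moves into namespace cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and firewall port 4420 is opened. A dry-run paraphrase of that plumbing (an editor's sketch, not SPDK's actual common.sh; `DRY_RUN=echo` keeps it runnable without root, interface names follow the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps visible in the trace.
DRY_RUN=${DRY_RUN:-echo}   # set DRY_RUN= (empty) to execute for real, as root
NS=cvl_0_0_ns_spdk
$DRY_RUN ip -4 addr flush cvl_0_0
$DRY_RUN ip -4 addr flush cvl_0_1
$DRY_RUN ip netns add "$NS"
$DRY_RUN ip link set cvl_0_0 netns "$NS"                   # target port into ns
$DRY_RUN ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
$DRY_RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
$DRY_RUN ip link set cvl_0_1 up
$DRY_RUN ip netns exec "$NS" ip link set cvl_0_0 up
$DRY_RUN ip netns exec "$NS" ip link set lo up
$DRY_RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$DRY_RUN ping -c 1 10.0.0.2                                # reachability check
```

Running the target inside the namespace is what makes the subsequent `ip netns exec cvl_0_0_ns_spdk nvmf_tgt …` prefix (NVMF_TARGET_NS_CMD) necessary.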
00:16:06.845 17:26:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.845 17:26:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:06.845 17:26:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:06.845 17:26:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.845 17:26:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.845 17:26:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.845 17:26:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.845 17:26:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:06.845 17:26:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.845 17:26:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.845 17:26:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.845 17:26:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:06.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:16:06.845 00:16:06.845 --- 10.0.0.2 ping statistics --- 00:16:06.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.845 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:16:06.845 17:26:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:16:06.845 00:16:06.845 --- 10.0.0.1 ping statistics --- 00:16:06.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.845 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:16:06.845 17:26:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.845 17:26:14 -- nvmf/common.sh@410 -- # return 0 00:16:06.845 17:26:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:06.845 17:26:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.845 17:26:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:06.845 17:26:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:06.845 17:26:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.845 17:26:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:06.845 17:26:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:06.845 17:26:14 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:06.845 17:26:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:06.845 17:26:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:06.845 17:26:14 -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 17:26:14 -- nvmf/common.sh@469 -- # nvmfpid=3132809 00:16:06.845 17:26:14 -- nvmf/common.sh@470 -- # waitforlisten 3132809 00:16:06.845 17:26:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:06.845 17:26:14 -- common/autotest_common.sh@819 -- # '[' -z 3132809 ']' 00:16:06.845 17:26:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.845 17:26:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:06.845 17:26:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:06.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.845 17:26:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:06.845 17:26:14 -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 [2024-10-13 17:26:14.294179] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:06.845 [2024-10-13 17:26:14.294244] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.845 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.845 [2024-10-13 17:26:14.367743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:06.845 [2024-10-13 17:26:14.404878] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:06.845 [2024-10-13 17:26:14.405026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.845 [2024-10-13 17:26:14.405035] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.845 [2024-10-13 17:26:14.405043] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
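The waitforlisten banner above is a bounded poll: the helper retries (max_retries=100 in the trace) until the target's RPC endpoint appears before any rpc.py call is issued. A hedged stand-alone sketch; the temp file below substitutes for /var/tmp/spdk.sock and the background `touch` for nvmf_tgt creating its socket:

```shell
#!/usr/bin/env bash
# Poll-for-socket sketch of waitforlisten: wait (bounded) for the RPC
# endpoint to show up before talking to it.
rpc_addr=$(mktemp -u)                 # stand-in for /var/tmp/spdk.sock
max_retries=100
( sleep 0.3; touch "$rpc_addr" ) &    # stand-in for the target starting up
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
for ((i = 0; i < max_retries; i++)); do
  [ -e "$rpc_addr" ] && break
  sleep 0.1
done
if [ -e "$rpc_addr" ]; then echo "listening"; else echo "timed out"; fi
rm -f "$rpc_addr"
```

The real helper additionally checks that the pid is still alive on each iteration, so a crashed target fails fast instead of burning all 100 retries.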
00:16:06.845 [2024-10-13 17:26:14.405136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.845 [2024-10-13 17:26:14.405151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.845 17:26:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:06.845 17:26:15 -- common/autotest_common.sh@852 -- # return 0 00:16:06.845 17:26:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:06.845 17:26:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:06.845 17:26:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 17:26:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.845 17:26:15 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:06.845 17:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.845 17:26:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 [2024-10-13 17:26:15.140733] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.845 17:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.845 17:26:15 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:06.845 17:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.845 17:26:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 17:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.845 17:26:15 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.845 17:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.845 17:26:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 [2024-10-13 17:26:15.156889] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.845 17:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:16:06.845 17:26:15 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:06.845 17:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.845 17:26:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 NULL1 00:16:06.845 17:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.845 17:26:15 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:06.845 17:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.845 17:26:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 Delay0 00:16:06.845 17:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.845 17:26:15 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:06.845 17:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.845 17:26:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 17:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.845 17:26:15 -- target/delete_subsystem.sh@28 -- # perf_pid=3132941 00:16:06.845 17:26:15 -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:06.845 17:26:15 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:06.845 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.845 [2024-10-13 17:26:15.241518] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
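The rpc_cmd calls traced above build the bait for this test: a null bdev wrapped in a delay bdev, so I/O stays in flight long enough for nvmf_delete_subsystem to catch it mid-run. A dry-run sketch of the sequence (the `scripts/rpc.py` path is an assumption; numeric arguments are copied from the trace; `DRY_RUN=echo` so the sketch runs anywhere):

```shell
#!/usr/bin/env bash
# Dry-run of the delete_subsystem setup seen in the trace.
DRY_RUN=${DRY_RUN:-echo}
RPC="$DRY_RUN scripts/rpc.py"          # rpc.py location is an assumption
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512   # backing bdev: size 1000, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
  -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delays as in the log
$RPC nvmf_subsystem_add_ns "$NQN" Delay0        # slow namespace: I/O lingers
# spdk_nvme_perf then hammers the subsystem for 5 s; deleting it mid-run
# must abort queued I/O cleanly (the sct=0/sc=8 completions in the log).
$RPC nvmf_delete_subsystem "$NQN"
```

In the trace the same calls go through the rpc_cmd shell wrapper rather than rpc.py directly, but the RPC methods and arguments are identical.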
00:16:08.756 17:26:17 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.756 17:26:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.756 17:26:17 -- common/autotest_common.sh@10 -- # set +x 00:16:09.017 […repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions with interleaved 'starting I/O failed: -6' markers condensed…] 00:16:09.017 [2024-10-13 17:26:17.444813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x593a80 is same with the state(5) to be set 00:16:09.017 […further 'Read/Write completed with error (sct=0, sc=8)' completions condensed…]
00:16:09.017 Read completed with error (sct=0, sc=8) 00:16:09.017 Write completed with error (sct=0, sc=8) 00:16:09.017 Read completed with error (sct=0, sc=8) 00:16:09.017 Read completed with error (sct=0, sc=8) 00:16:09.017 Read completed with error (sct=0, sc=8) 00:16:09.017 Write completed with error (sct=0, sc=8) 00:16:09.017 Write completed with error (sct=0, sc=8) 00:16:09.017 Read completed with error (sct=0, sc=8) 00:16:09.017 Write completed with error (sct=0, sc=8) 00:16:09.017 Write completed with error (sct=0, sc=8) 00:16:09.017 Read completed with error (sct=0, sc=8) 00:16:09.017 Read completed with error (sct=0, sc=8) 00:16:09.017 Read completed with error (sct=0, sc=8) 00:16:09.017 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 [2024-10-13 17:26:17.445242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b1140 is same with the state(5) to be set 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read 
completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 starting I/O failed: -6 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 starting I/O failed: -6 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 starting I/O failed: -6 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 starting I/O failed: -6 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 starting I/O failed: -6 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 starting I/O failed: -6 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 starting I/O failed: -6 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 starting I/O failed: -6 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 
Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 starting I/O failed: -6 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 starting I/O failed: -6 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 [2024-10-13 17:26:17.450281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd51c000c00 is same with the state(5) to be set 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 
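Each `completed with error (sct=0, sc=8)` line above is one aborted command; status code type 0 with status code 0x08 is the NVMe generic status "Command Aborted due to SQ Deletion", which is the expected outcome of deleting the subsystem under load. A quick way to tally aborted reads versus writes from saved console output (the sample text here is a hypothetical three-line excerpt, not the full log):

```shell
#!/usr/bin/env bash
# Count aborted reads vs writes in delete_subsystem-style log output.
log='Read completed with error (sct=0, sc=8)
Write completed with error (sct=0, sc=8)
Read completed with error (sct=0, sc=8)'

counts=$(printf '%s\n' "$log" | awk '
/completed with error \(sct=0, sc=8\)/ { n[$1]++ }
END { printf "Read=%d Write=%d", n["Read"], n["Write"] }')
echo "$counts"   # Read=2 Write=1
```

On a real run you would pipe the captured console log in place of the `$log` sample.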
00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Write completed with error (sct=0, sc=8) 00:16:09.018 Read completed with error (sct=0, sc=8) 00:16:09.970 [2024-10-13 17:26:18.421560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x594f40 is same with the state(5) to be set 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Write completed with error (sct=0, sc=8) 00:16:09.970 Read 
completed with error (sct=0, sc=8) 00:16:09.970 Write completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Write completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Write completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 [2024-10-13 17:26:18.448389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5937d0 is same with the state(5) to be set 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Write completed with error (sct=0, sc=8) 00:16:09.970 Write completed with error (sct=0, sc=8) 00:16:09.970 Write completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Write completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 [2024-10-13 17:26:18.448471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x593d30 is same with 
the state(5) to be set 00:16:09.970 Write completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.970 Read completed with error (sct=0, sc=8) 00:16:09.971 Write completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 [2024-10-13 17:26:18.452320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd51c00c600 is same with the state(5) to be set 00:16:09.971 Write completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Write completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, 
sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Write completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 Read completed with error (sct=0, sc=8) 00:16:09.971 [2024-10-13 17:26:18.452398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd51c00bf20 is same with the state(5) to be set 00:16:09.971 [2024-10-13 17:26:18.452870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x594f40 (9): Bad file descriptor 00:16:09.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:09.971 Initializing NVMe Controllers 00:16:09.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:09.971 Controller IO queue size 128, less than required. 00:16:09.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:09.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:09.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:09.971 Initialization complete. Launching workers. 
00:16:09.971 ======================================================== 00:16:09.971 Latency(us) 00:16:09.971 Device Information : IOPS MiB/s Average min max 00:16:09.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.99 0.08 912960.18 430.77 1005106.32 00:16:09.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.98 0.08 911952.00 279.08 1009413.07 00:16:09.971 ======================================================== 00:16:09.971 Total : 322.97 0.16 912454.53 279.08 1009413.07 00:16:09.971 00:16:09.971 17:26:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.971 17:26:18 -- target/delete_subsystem.sh@34 -- # delay=0 00:16:09.971 17:26:18 -- target/delete_subsystem.sh@35 -- # kill -0 3132941 00:16:09.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3132941) - No such process 00:16:09.971 17:26:18 -- target/delete_subsystem.sh@45 -- # NOT wait 3132941 00:16:09.971 17:26:18 -- common/autotest_common.sh@640 -- # local es=0 00:16:09.971 17:26:18 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3132941 00:16:09.971 17:26:18 -- common/autotest_common.sh@628 -- # local arg=wait 00:16:09.971 17:26:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.971 17:26:18 -- common/autotest_common.sh@632 -- # type -t wait 00:16:09.971 17:26:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.971 17:26:18 -- common/autotest_common.sh@643 -- # wait 3132941 00:16:09.971 17:26:18 -- common/autotest_common.sh@643 -- # es=1 00:16:09.971 17:26:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:09.971 17:26:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:09.971 17:26:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:09.971 17:26:18 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:09.971 
17:26:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.971 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.971 17:26:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.971 17:26:18 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.971 17:26:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.971 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.971 [2024-10-13 17:26:18.480124] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.971 17:26:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.971 17:26:18 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:09.971 17:26:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.971 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:10.232 17:26:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.232 17:26:18 -- target/delete_subsystem.sh@54 -- # perf_pid=3133627 00:16:10.232 17:26:18 -- target/delete_subsystem.sh@56 -- # delay=0 00:16:10.232 17:26:18 -- target/delete_subsystem.sh@57 -- # kill -0 3133627 00:16:10.232 17:26:18 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:10.232 17:26:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:10.232 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.232 [2024-10-13 17:26:18.551138] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
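The repeating `(( delay++ > 20 ))` / `kill -0 3133627` / `sleep 0.5` lines that follow are delete_subsystem.sh polling for the perf process to exit, giving up after 20 half-second iterations. The pattern can be sketched as a self-contained loop, with a short background `sleep` standing in for the spdk_nvme_perf child:

```shell
#!/usr/bin/env bash
# Poll a child process with kill -0 until it exits; bail out after ~10 s
# (20 x 0.5 s), mirroring the delay++ > 20 check in the log. The child
# here is a stand-in sleep, not the real perf binary.
sleep 0.3 &
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "timed out waiting for $perf_pid"
        exit 1
    fi
    sleep 0.5
done
echo "perf process exited"
```

`kill -0` sends no signal; it only checks that the PID still exists, which makes it a cheap liveness probe for this kind of wait loop.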
00:16:10.494 17:26:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:10.494 17:26:19 -- target/delete_subsystem.sh@57 -- # kill -0 3133627 00:16:10.494 17:26:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:11.066 17:26:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:11.066 17:26:19 -- target/delete_subsystem.sh@57 -- # kill -0 3133627 00:16:11.066 17:26:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:11.637 17:26:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:11.637 17:26:20 -- target/delete_subsystem.sh@57 -- # kill -0 3133627 00:16:11.637 17:26:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:12.209 17:26:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:12.209 17:26:20 -- target/delete_subsystem.sh@57 -- # kill -0 3133627 00:16:12.209 17:26:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:12.781 17:26:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:12.781 17:26:21 -- target/delete_subsystem.sh@57 -- # kill -0 3133627 00:16:12.781 17:26:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:13.042 17:26:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:13.042 17:26:21 -- target/delete_subsystem.sh@57 -- # kill -0 3133627 00:16:13.042 17:26:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:13.303 Initializing NVMe Controllers 00:16:13.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:13.303 Controller IO queue size 128, less than required. 00:16:13.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:13.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:13.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:13.303 Initialization complete. Launching workers. 
00:16:13.303 ======================================================== 00:16:13.303 Latency(us) 00:16:13.303 Device Information : IOPS MiB/s Average min max 00:16:13.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002255.94 1000141.38 1041744.16 00:16:13.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003023.91 1000252.24 1009804.98 00:16:13.303 ======================================================== 00:16:13.303 Total : 256.00 0.12 1002639.93 1000141.38 1041744.16 00:16:13.303 00:16:13.564 17:26:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:13.564 17:26:22 -- target/delete_subsystem.sh@57 -- # kill -0 3133627 00:16:13.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3133627) - No such process 00:16:13.564 17:26:22 -- target/delete_subsystem.sh@67 -- # wait 3133627 00:16:13.564 17:26:22 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:13.564 17:26:22 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:13.564 17:26:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:13.564 17:26:22 -- nvmf/common.sh@116 -- # sync 00:16:13.564 17:26:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:13.564 17:26:22 -- nvmf/common.sh@119 -- # set +e 00:16:13.564 17:26:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:13.564 17:26:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:13.564 rmmod nvme_tcp 00:16:13.564 rmmod nvme_fabrics 00:16:13.564 rmmod nvme_keyring 00:16:13.825 17:26:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:13.825 17:26:22 -- nvmf/common.sh@123 -- # set -e 00:16:13.825 17:26:22 -- nvmf/common.sh@124 -- # return 0 00:16:13.825 17:26:22 -- nvmf/common.sh@477 -- # '[' -n 3132809 ']' 00:16:13.825 17:26:22 -- nvmf/common.sh@478 -- # killprocess 3132809 00:16:13.825 17:26:22 -- common/autotest_common.sh@926 -- # '[' -z 3132809 ']' 00:16:13.825 17:26:22 
-- common/autotest_common.sh@930 -- # kill -0 3132809 00:16:13.825 17:26:22 -- common/autotest_common.sh@931 -- # uname 00:16:13.825 17:26:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:13.825 17:26:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3132809 00:16:13.825 17:26:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:13.825 17:26:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:13.825 17:26:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3132809' 00:16:13.825 killing process with pid 3132809 00:16:13.825 17:26:22 -- common/autotest_common.sh@945 -- # kill 3132809 00:16:13.825 17:26:22 -- common/autotest_common.sh@950 -- # wait 3132809 00:16:13.825 17:26:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:13.825 17:26:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:13.825 17:26:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:13.825 17:26:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.825 17:26:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:13.825 17:26:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.825 17:26:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.825 17:26:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.373 17:26:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:16.373 00:16:16.373 real 0m17.647s 00:16:16.373 user 0m29.737s 00:16:16.373 sys 0m6.629s 00:16:16.373 17:26:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.373 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:16:16.373 ************************************ 00:16:16.373 END TEST nvmf_delete_subsystem 00:16:16.373 ************************************ 00:16:16.373 17:26:24 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:16:16.373 17:26:24 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:16.373 17:26:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:16.373 17:26:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.373 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:16:16.373 ************************************ 00:16:16.373 START TEST nvmf_nvme_cli 00:16:16.373 ************************************ 00:16:16.373 17:26:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:16.373 * Looking for test storage... 00:16:16.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.373 17:26:24 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.373 17:26:24 -- nvmf/common.sh@7 -- # uname -s 00:16:16.373 17:26:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.373 17:26:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.373 17:26:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.373 17:26:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.373 17:26:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.373 17:26:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.373 17:26:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.373 17:26:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.373 17:26:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.373 17:26:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.373 17:26:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:16.373 17:26:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:16.373 17:26:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.373 
17:26:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.373 17:26:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.373 17:26:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.373 17:26:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.373 17:26:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.373 17:26:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.373 17:26:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.373 17:26:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.373 17:26:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.373 17:26:24 -- paths/export.sh@5 -- # export PATH 00:16:16.373 17:26:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.373 17:26:24 -- nvmf/common.sh@46 -- # : 0 00:16:16.373 17:26:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:16.373 17:26:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:16.373 17:26:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:16.373 17:26:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.373 17:26:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.373 17:26:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:16.373 17:26:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:16.373 17:26:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:16.373 17:26:24 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:16.373 17:26:24 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:16.373 17:26:24 -- target/nvme_cli.sh@14 
-- # devs=() 00:16:16.373 17:26:24 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:16.373 17:26:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:16.373 17:26:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.373 17:26:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:16.373 17:26:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:16.373 17:26:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:16.373 17:26:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.373 17:26:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.373 17:26:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.373 17:26:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:16.373 17:26:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:16.373 17:26:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:16.373 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:16:24.520 17:26:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:24.520 17:26:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:24.520 17:26:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:24.520 17:26:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:24.520 17:26:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:24.520 17:26:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:24.520 17:26:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:24.520 17:26:31 -- nvmf/common.sh@294 -- # net_devs=() 00:16:24.520 17:26:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:24.521 17:26:31 -- nvmf/common.sh@295 -- # e810=() 00:16:24.521 17:26:31 -- nvmf/common.sh@295 -- # local -ga e810 00:16:24.521 17:26:31 -- nvmf/common.sh@296 -- # x722=() 00:16:24.521 17:26:31 -- nvmf/common.sh@296 -- # local -ga x722 00:16:24.521 17:26:31 -- nvmf/common.sh@297 -- # mlx=() 00:16:24.521 17:26:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:24.521 17:26:31 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.521 17:26:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:24.521 17:26:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:24.521 17:26:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:24.521 17:26:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:24.521 17:26:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:24.521 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:24.521 17:26:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:24.521 17:26:31 -- 
nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:24.521 17:26:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:24.521 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:24.521 17:26:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:24.521 17:26:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:24.521 17:26:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.521 17:26:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:24.521 17:26:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.521 17:26:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:24.521 Found net devices under 0000:31:00.0: cvl_0_0 00:16:24.521 17:26:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.521 17:26:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:24.521 17:26:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.521 17:26:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:24.521 17:26:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.521 17:26:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:24.521 Found net devices under 0000:31:00.1: cvl_0_1 00:16:24.521 17:26:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.521 17:26:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:24.521 17:26:31 -- nvmf/common.sh@402 
-- # is_hw=yes 00:16:24.521 17:26:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:24.521 17:26:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.521 17:26:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.521 17:26:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.521 17:26:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:24.521 17:26:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.521 17:26:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.521 17:26:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:24.521 17:26:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.521 17:26:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.521 17:26:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:24.521 17:26:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:24.521 17:26:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.521 17:26:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.521 17:26:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.521 17:26:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.521 17:26:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:24.521 17:26:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.521 17:26:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.521 17:26:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.521 17:26:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:24.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:24.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:16:24.521 00:16:24.521 --- 10.0.0.2 ping statistics --- 00:16:24.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.521 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:16:24.521 17:26:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:16:24.521 00:16:24.521 --- 10.0.0.1 ping statistics --- 00:16:24.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.521 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:16:24.521 17:26:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.521 17:26:31 -- nvmf/common.sh@410 -- # return 0 00:16:24.521 17:26:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:24.521 17:26:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.521 17:26:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:24.521 17:26:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.521 17:26:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:24.521 17:26:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:24.521 17:26:31 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:24.521 17:26:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:24.521 17:26:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:24.521 17:26:31 -- common/autotest_common.sh@10 -- # set +x 00:16:24.521 17:26:31 -- nvmf/common.sh@469 -- # nvmfpid=3138645 00:16:24.521 17:26:31 -- nvmf/common.sh@470 -- # waitforlisten 3138645 00:16:24.521 17:26:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:24.521 17:26:31 -- common/autotest_common.sh@819 
-- # '[' -z 3138645 ']' 00:16:24.521 17:26:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.521 17:26:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:24.521 17:26:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.521 17:26:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:24.521 17:26:31 -- common/autotest_common.sh@10 -- # set +x 00:16:24.521 [2024-10-13 17:26:31.932730] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:24.521 [2024-10-13 17:26:31.932807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.521 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.521 [2024-10-13 17:26:32.011278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.521 [2024-10-13 17:26:32.050426] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:24.521 [2024-10-13 17:26:32.050565] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.521 [2024-10-13 17:26:32.050576] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.521 [2024-10-13 17:26:32.050584] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:24.521 [2024-10-13 17:26:32.050736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.521 [2024-10-13 17:26:32.050860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.521 [2024-10-13 17:26:32.051020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.521 [2024-10-13 17:26:32.051021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.521 17:26:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:24.521 17:26:32 -- common/autotest_common.sh@852 -- # return 0 00:16:24.521 17:26:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:24.521 17:26:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:24.521 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.521 17:26:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.521 17:26:32 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:24.521 17:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.521 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.521 [2024-10-13 17:26:32.771419] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.521 17:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.521 17:26:32 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:24.521 17:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.521 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.521 Malloc0 00:16:24.521 17:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.521 17:26:32 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:24.521 17:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.521 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.521 Malloc1 00:16:24.521 17:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:16:24.521 17:26:32 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:24.521 17:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.521 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.521 17:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.522 17:26:32 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:24.522 17:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.522 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.522 17:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.522 17:26:32 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.522 17:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.522 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.522 17:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.522 17:26:32 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.522 17:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.522 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.522 [2024-10-13 17:26:32.857121] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.522 17:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.522 17:26:32 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:24.522 17:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.522 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.522 17:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.522 17:26:32 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:16:24.522 00:16:24.522 Discovery Log Number of Records 2, Generation counter 2 00:16:24.522 =====Discovery Log Entry 0====== 00:16:24.522 trtype: tcp 00:16:24.522 adrfam: ipv4 00:16:24.522 subtype: current discovery subsystem 00:16:24.522 treq: not required 00:16:24.522 portid: 0 00:16:24.522 trsvcid: 4420 00:16:24.522 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:24.522 traddr: 10.0.0.2 00:16:24.522 eflags: explicit discovery connections, duplicate discovery information 00:16:24.522 sectype: none 00:16:24.522 =====Discovery Log Entry 1====== 00:16:24.522 trtype: tcp 00:16:24.522 adrfam: ipv4 00:16:24.522 subtype: nvme subsystem 00:16:24.522 treq: not required 00:16:24.522 portid: 0 00:16:24.522 trsvcid: 4420 00:16:24.522 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:24.522 traddr: 10.0.0.2 00:16:24.522 eflags: none 00:16:24.522 sectype: none 00:16:24.522 17:26:33 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:24.522 17:26:33 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:24.522 17:26:33 -- nvmf/common.sh@510 -- # local dev _ 00:16:24.522 17:26:33 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:24.522 17:26:33 -- nvmf/common.sh@509 -- # nvme list 00:16:24.522 17:26:33 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:24.522 17:26:33 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:24.522 17:26:33 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:24.522 17:26:33 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:24.522 17:26:33 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:24.522 17:26:33 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.437 17:26:34 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:26.437 17:26:34 -- common/autotest_common.sh@1177 -- # local i=0 00:16:26.437 17:26:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.437 17:26:34 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:16:26.437 17:26:34 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:16:26.437 17:26:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:28.351 17:26:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:28.351 17:26:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:28.351 17:26:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.351 17:26:36 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:16:28.351 17:26:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.351 17:26:36 -- common/autotest_common.sh@1187 -- # return 0 00:16:28.351 17:26:36 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:28.351 17:26:36 -- nvmf/common.sh@510 -- # local dev _ 00:16:28.351 17:26:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:28.351 17:26:36 -- nvmf/common.sh@509 -- # nvme list 00:16:28.351 17:26:36 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:28.351 17:26:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:28.351 17:26:36 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:28.351 17:26:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:28.351 17:26:36 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:28.351 17:26:36 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:28.351 17:26:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:28.351 17:26:36 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:28.351 17:26:36 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:28.351 17:26:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:28.351 17:26:36 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 
00:16:28.351 /dev/nvme0n2 ]] 00:16:28.351 17:26:36 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:28.351 17:26:36 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:28.351 17:26:36 -- nvmf/common.sh@510 -- # local dev _ 00:16:28.351 17:26:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:28.351 17:26:36 -- nvmf/common.sh@509 -- # nvme list 00:16:28.612 17:26:36 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:28.612 17:26:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:28.612 17:26:36 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:28.612 17:26:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:28.612 17:26:36 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:28.612 17:26:36 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:28.612 17:26:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:28.612 17:26:36 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:28.612 17:26:36 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:28.612 17:26:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:28.612 17:26:36 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:28.612 17:26:36 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.873 17:26:37 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.873 17:26:37 -- common/autotest_common.sh@1198 -- # local i=0 00:16:28.873 17:26:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:28.873 17:26:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.873 17:26:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:28.873 17:26:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.873 17:26:37 -- common/autotest_common.sh@1210 -- # return 0 00:16:28.873 17:26:37 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
00:16:28.873 17:26:37 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.873 17:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.873 17:26:37 -- common/autotest_common.sh@10 -- # set +x 00:16:28.873 17:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.873 17:26:37 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:28.873 17:26:37 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:28.873 17:26:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:28.873 17:26:37 -- nvmf/common.sh@116 -- # sync 00:16:28.873 17:26:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:28.873 17:26:37 -- nvmf/common.sh@119 -- # set +e 00:16:28.873 17:26:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:28.873 17:26:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:28.873 rmmod nvme_tcp 00:16:28.873 rmmod nvme_fabrics 00:16:28.873 rmmod nvme_keyring 00:16:28.873 17:26:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:28.873 17:26:37 -- nvmf/common.sh@123 -- # set -e 00:16:28.873 17:26:37 -- nvmf/common.sh@124 -- # return 0 00:16:28.873 17:26:37 -- nvmf/common.sh@477 -- # '[' -n 3138645 ']' 00:16:28.873 17:26:37 -- nvmf/common.sh@478 -- # killprocess 3138645 00:16:28.873 17:26:37 -- common/autotest_common.sh@926 -- # '[' -z 3138645 ']' 00:16:28.873 17:26:37 -- common/autotest_common.sh@930 -- # kill -0 3138645 00:16:28.873 17:26:37 -- common/autotest_common.sh@931 -- # uname 00:16:28.873 17:26:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.873 17:26:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3138645 00:16:28.873 17:26:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:28.873 17:26:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:28.873 17:26:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3138645' 00:16:28.873 killing process with pid 3138645 00:16:28.873 17:26:37 -- 
common/autotest_common.sh@945 -- # kill 3138645 00:16:28.873 17:26:37 -- common/autotest_common.sh@950 -- # wait 3138645 00:16:29.134 17:26:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:29.134 17:26:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:29.134 17:26:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:29.134 17:26:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.134 17:26:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:29.134 17:26:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.134 17:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.134 17:26:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.677 17:26:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:31.677 00:16:31.677 real 0m15.164s 00:16:31.677 user 0m23.823s 00:16:31.677 sys 0m6.169s 00:16:31.677 17:26:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.677 17:26:39 -- common/autotest_common.sh@10 -- # set +x 00:16:31.677 ************************************ 00:16:31.677 END TEST nvmf_nvme_cli 00:16:31.677 ************************************ 00:16:31.677 17:26:39 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:16:31.677 17:26:39 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:31.677 17:26:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:31.677 17:26:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:31.677 17:26:39 -- common/autotest_common.sh@10 -- # set +x 00:16:31.677 ************************************ 00:16:31.677 START TEST nvmf_vfio_user 00:16:31.677 ************************************ 00:16:31.677 17:26:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:31.677 * Looking for test storage... 
00:16:31.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.677 17:26:39 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.677 17:26:39 -- nvmf/common.sh@7 -- # uname -s 00:16:31.677 17:26:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.677 17:26:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.677 17:26:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.677 17:26:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.677 17:26:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.677 17:26:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.677 17:26:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.677 17:26:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.677 17:26:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.677 17:26:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.677 17:26:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.677 17:26:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.677 17:26:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.677 17:26:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.677 17:26:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.677 17:26:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.677 17:26:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.677 17:26:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.677 17:26:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.677 17:26:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.677 17:26:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.677 17:26:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.677 17:26:39 -- paths/export.sh@5 -- # export PATH 00:16:31.677 17:26:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.677 17:26:39 -- nvmf/common.sh@46 -- # : 0 00:16:31.677 17:26:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:31.677 17:26:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:31.677 17:26:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:31.677 17:26:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.677 17:26:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.677 17:26:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:31.678 17:26:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:31.678 17:26:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@52 -- # local 
transport_args= 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3140202 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3140202' 00:16:31.678 Process pid: 3140202 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3140202 00:16:31.678 17:26:39 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:31.678 17:26:39 -- common/autotest_common.sh@819 -- # '[' -z 3140202 ']' 00:16:31.678 17:26:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.678 17:26:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:31.678 17:26:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.678 17:26:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:31.678 17:26:39 -- common/autotest_common.sh@10 -- # set +x 00:16:31.678 [2024-10-13 17:26:39.803379] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:31.678 [2024-10-13 17:26:39.803431] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.678 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.678 [2024-10-13 17:26:39.861344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.678 [2024-10-13 17:26:39.891878] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:31.678 [2024-10-13 17:26:39.892013] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.678 [2024-10-13 17:26:39.892023] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.678 [2024-10-13 17:26:39.892032] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.678 [2024-10-13 17:26:39.892178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.678 [2024-10-13 17:26:39.892417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.678 [2024-10-13 17:26:39.892436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.678 [2024-10-13 17:26:39.892446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.249 17:26:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.249 17:26:40 -- common/autotest_common.sh@852 -- # return 0 00:16:32.249 17:26:40 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:33.192 17:26:41 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:33.453 17:26:41 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:33.453 17:26:41 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:33.453 17:26:41 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:16:33.453 17:26:41 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:33.453 17:26:41 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:33.453 Malloc1 00:16:33.453 17:26:41 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:33.714 17:26:42 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:33.975 17:26:42 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:33.975 17:26:42 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:33.975 17:26:42 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:33.975 17:26:42 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:34.236 Malloc2 00:16:34.236 17:26:42 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:34.496 17:26:42 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:34.497 17:26:43 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:34.757 17:26:43 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:34.757 17:26:43 -- 
target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:34.757 17:26:43 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:34.757 17:26:43 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:34.757 17:26:43 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:34.757 17:26:43 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:34.757 [2024-10-13 17:26:43.207495] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:34.757 [2024-10-13 17:26:43.207542] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3140903 ] 00:16:34.757 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.757 [2024-10-13 17:26:43.239665] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:34.757 [2024-10-13 17:26:43.248385] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:34.757 [2024-10-13 17:26:43.248406] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7facc0283000 00:16:34.757 [2024-10-13 17:26:43.249388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:34.757 [2024-10-13 17:26:43.250393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:34.757 [2024-10-13 17:26:43.251397] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:34.757 [2024-10-13 17:26:43.252406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:34.757 [2024-10-13 17:26:43.253405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:34.757 [2024-10-13 17:26:43.254416] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:34.757 [2024-10-13 17:26:43.255426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:34.757 [2024-10-13 17:26:43.256429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:34.757 [2024-10-13 17:26:43.257438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:34.757 [2024-10-13 17:26:43.257448] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7facbef8d000 00:16:34.757 [2024-10-13 17:26:43.258773] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:34.757 [2024-10-13 17:26:43.279216] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:34.757 [2024-10-13 17:26:43.279236] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:34.757 [2024-10-13 17:26:43.281589] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:34.757 
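The setup phase earlier in the trace (creating the VFIOUSER transport, then per device a malloc bdev, subsystem, namespace, and listener) can be sketched as one loop. The rpc verbs, directory layout, `Malloc<i>` bdev names, and `nqn.2019-07.io.spdk:cnode<i>` NQN pattern are copied from the log; the function name and its parameters are hypothetical.

```shell
# Hedged sketch of the NUM_DEVICES=2 setup loop driven by nvmf_vfio_user.sh
# in the trace above; function name/parameters are assumptions.
setup_vfio_user_devices() {
  local rpc=$1                              # rpc.py wrapper command
  local num=$2                              # NUM_DEVICES (2 in the log)
  local base=${3:-/var/run/vfio-user/domain}

  "$rpc" nvmf_create_transport -t VFIOUSER
  for i in $(seq 1 "$num"); do
    mkdir -p "$base/vfio-user$i/$i"
    # 64 MiB malloc bdev with 512-byte blocks, named Malloc<i>
    "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
    "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    # The listener address for VFIOUSER is the per-device directory
    "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "$base/vfio-user$i/$i" -s 0
  done
}
```

Each iteration leaves a `cntrl` socket under the listener directory, which is the path the identify step below connects to as `traddr`.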
[2024-10-13 17:26:43.281631] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:34.757 [2024-10-13 17:26:43.281712] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:34.757 [2024-10-13 17:26:43.281732] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:34.757 [2024-10-13 17:26:43.281737] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:35.019 [2024-10-13 17:26:43.282584] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:35.019 [2024-10-13 17:26:43.282594] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:35.019 [2024-10-13 17:26:43.282601] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:35.019 [2024-10-13 17:26:43.283591] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:35.019 [2024-10-13 17:26:43.283600] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:35.019 [2024-10-13 17:26:43.283607] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:35.019 [2024-10-13 17:26:43.284596] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:35.019 [2024-10-13 17:26:43.284605] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:35.019 [2024-10-13 17:26:43.285610] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:35.019 [2024-10-13 17:26:43.285617] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:35.019 [2024-10-13 17:26:43.285623] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:35.020 [2024-10-13 17:26:43.285629] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:35.020 [2024-10-13 17:26:43.285735] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:35.020 [2024-10-13 17:26:43.285739] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:35.020 [2024-10-13 17:26:43.285744] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:35.020 [2024-10-13 17:26:43.286613] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:35.020 [2024-10-13 17:26:43.287613] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:35.020 [2024-10-13 17:26:43.288626] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:35.020 [2024-10-13 17:26:43.289649] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:35.020 [2024-10-13 17:26:43.290635] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:35.020 [2024-10-13 17:26:43.290643] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:35.020 [2024-10-13 17:26:43.290648] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.290669] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:35.020 [2024-10-13 17:26:43.290680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.290693] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.020 [2024-10-13 17:26:43.290698] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.020 [2024-10-13 17:26:43.290711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.290748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:35.020 [2024-10-13 17:26:43.290757] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:35.020 [2024-10-13 17:26:43.290762] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:35.020 [2024-10-13 17:26:43.290769] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:35.020 [2024-10-13 17:26:43.290774] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:35.020 [2024-10-13 17:26:43.290779] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:35.020 [2024-10-13 17:26:43.290783] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:35.020 [2024-10-13 17:26:43.290788] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.290798] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.290807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.290817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:35.020 [2024-10-13 17:26:43.290829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.020 [2024-10-13 17:26:43.290838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.020 [2024-10-13 17:26:43.290846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.020 [2024-10-13 
17:26:43.290854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.020 [2024-10-13 17:26:43.290859] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.290868] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.290877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.290886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:35.020 [2024-10-13 17:26:43.290892] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:35.020 [2024-10-13 17:26:43.290897] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.290904] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.290911] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.290920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.290927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 
dnr:0 00:16:35.020 [2024-10-13 17:26:43.290986] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.290994] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.291001] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:35.020 [2024-10-13 17:26:43.291009] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:35.020 [2024-10-13 17:26:43.291016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.291023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:35.020 [2024-10-13 17:26:43.291034] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:35.020 [2024-10-13 17:26:43.291042] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.291049] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.291056] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.020 [2024-10-13 17:26:43.291060] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.020 [2024-10-13 17:26:43.291071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.291087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:35.020 [2024-10-13 17:26:43.291099] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.291107] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.291113] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.020 [2024-10-13 17:26:43.291118] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.020 [2024-10-13 17:26:43.291124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.291138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:35.020 [2024-10-13 17:26:43.291146] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.291152] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.291160] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.291166] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 
ms) 00:16:35.020 [2024-10-13 17:26:43.291171] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.291176] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:35.020 [2024-10-13 17:26:43.291181] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:35.020 [2024-10-13 17:26:43.291186] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:35.020 [2024-10-13 17:26:43.291204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.291215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:35.020 [2024-10-13 17:26:43.291227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.291237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:35.020 [2024-10-13 17:26:43.291248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.291260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:35.020 [2024-10-13 17:26:43.291272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:35.020 [2024-10-13 17:26:43.291279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:35.020 [2024-10-13 17:26:43.291289] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:35.020 [2024-10-13 17:26:43.291294] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:35.020 [2024-10-13 17:26:43.291297] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:35.020 [2024-10-13 17:26:43.291301] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:35.020 [2024-10-13 17:26:43.291307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:35.020 [2024-10-13 17:26:43.291315] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:35.020 [2024-10-13 17:26:43.291319] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:35.020 [2024-10-13 17:26:43.291325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:35.021 [2024-10-13 17:26:43.291332] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:35.021 [2024-10-13 17:26:43.291336] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.021 [2024-10-13 17:26:43.291342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.021 [2024-10-13 17:26:43.291350] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:35.021 [2024-10-13 17:26:43.291354] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:35.021 [2024-10-13 17:26:43.291360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:35.021 [2024-10-13 17:26:43.291367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:35.021 [2024-10-13 17:26:43.291380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:35.021 [2024-10-13 17:26:43.291389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:35.021 [2024-10-13 17:26:43.291396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:35.021 ===================================================== 00:16:35.021 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:35.021 ===================================================== 00:16:35.021 Controller Capabilities/Features 00:16:35.021 ================================ 00:16:35.021 Vendor ID: 4e58 00:16:35.021 Subsystem Vendor ID: 4e58 00:16:35.021 Serial Number: SPDK1 00:16:35.021 Model Number: SPDK bdev Controller 00:16:35.021 Firmware Version: 24.01.1 00:16:35.021 Recommended Arb Burst: 6 00:16:35.021 IEEE OUI Identifier: 8d 6b 50 00:16:35.021 Multi-path I/O 00:16:35.021 May have multiple subsystem ports: Yes 00:16:35.021 May have multiple controllers: Yes 00:16:35.021 Associated with SR-IOV VF: No 00:16:35.021 Max Data Transfer Size: 131072 00:16:35.021 Max Number of Namespaces: 32 00:16:35.021 Max Number of I/O Queues: 127 00:16:35.021 NVMe Specification Version (VS): 1.3 00:16:35.021 NVMe Specification Version (Identify): 1.3 00:16:35.021 Maximum Queue Entries: 256 00:16:35.021 
Contiguous Queues Required: Yes 00:16:35.021 Arbitration Mechanisms Supported 00:16:35.021 Weighted Round Robin: Not Supported 00:16:35.021 Vendor Specific: Not Supported 00:16:35.021 Reset Timeout: 15000 ms 00:16:35.021 Doorbell Stride: 4 bytes 00:16:35.021 NVM Subsystem Reset: Not Supported 00:16:35.021 Command Sets Supported 00:16:35.021 NVM Command Set: Supported 00:16:35.021 Boot Partition: Not Supported 00:16:35.021 Memory Page Size Minimum: 4096 bytes 00:16:35.021 Memory Page Size Maximum: 4096 bytes 00:16:35.021 Persistent Memory Region: Not Supported 00:16:35.021 Optional Asynchronous Events Supported 00:16:35.021 Namespace Attribute Notices: Supported 00:16:35.021 Firmware Activation Notices: Not Supported 00:16:35.021 ANA Change Notices: Not Supported 00:16:35.021 PLE Aggregate Log Change Notices: Not Supported 00:16:35.021 LBA Status Info Alert Notices: Not Supported 00:16:35.021 EGE Aggregate Log Change Notices: Not Supported 00:16:35.021 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.021 Zone Descriptor Change Notices: Not Supported 00:16:35.021 Discovery Log Change Notices: Not Supported 00:16:35.021 Controller Attributes 00:16:35.021 128-bit Host Identifier: Supported 00:16:35.021 Non-Operational Permissive Mode: Not Supported 00:16:35.021 NVM Sets: Not Supported 00:16:35.021 Read Recovery Levels: Not Supported 00:16:35.021 Endurance Groups: Not Supported 00:16:35.021 Predictable Latency Mode: Not Supported 00:16:35.021 Traffic Based Keep ALive: Not Supported 00:16:35.021 Namespace Granularity: Not Supported 00:16:35.021 SQ Associations: Not Supported 00:16:35.021 UUID List: Not Supported 00:16:35.021 Multi-Domain Subsystem: Not Supported 00:16:35.021 Fixed Capacity Management: Not Supported 00:16:35.021 Variable Capacity Management: Not Supported 00:16:35.021 Delete Endurance Group: Not Supported 00:16:35.021 Delete NVM Set: Not Supported 00:16:35.021 Extended LBA Formats Supported: Not Supported 00:16:35.021 Flexible Data Placement 
Supported: Not Supported 00:16:35.021 00:16:35.021 Controller Memory Buffer Support 00:16:35.021 ================================ 00:16:35.021 Supported: No 00:16:35.021 00:16:35.021 Persistent Memory Region Support 00:16:35.021 ================================ 00:16:35.021 Supported: No 00:16:35.021 00:16:35.021 Admin Command Set Attributes 00:16:35.021 ============================ 00:16:35.021 Security Send/Receive: Not Supported 00:16:35.021 Format NVM: Not Supported 00:16:35.021 Firmware Activate/Download: Not Supported 00:16:35.021 Namespace Management: Not Supported 00:16:35.021 Device Self-Test: Not Supported 00:16:35.021 Directives: Not Supported 00:16:35.021 NVMe-MI: Not Supported 00:16:35.021 Virtualization Management: Not Supported 00:16:35.021 Doorbell Buffer Config: Not Supported 00:16:35.021 Get LBA Status Capability: Not Supported 00:16:35.021 Command & Feature Lockdown Capability: Not Supported 00:16:35.021 Abort Command Limit: 4 00:16:35.021 Async Event Request Limit: 4 00:16:35.021 Number of Firmware Slots: N/A 00:16:35.021 Firmware Slot 1 Read-Only: N/A 00:16:35.021 Firmware Activation Without Reset: N/A 00:16:35.021 Multiple Update Detection Support: N/A 00:16:35.021 Firmware Update Granularity: No Information Provided 00:16:35.021 Per-Namespace SMART Log: No 00:16:35.021 Asymmetric Namespace Access Log Page: Not Supported 00:16:35.021 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:35.021 Command Effects Log Page: Supported 00:16:35.021 Get Log Page Extended Data: Supported 00:16:35.021 Telemetry Log Pages: Not Supported 00:16:35.021 Persistent Event Log Pages: Not Supported 00:16:35.021 Supported Log Pages Log Page: May Support 00:16:35.021 Commands Supported & Effects Log Page: Not Supported 00:16:35.021 Feature Identifiers & Effects Log Page:May Support 00:16:35.021 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.021 Data Area 4 for Telemetry Log: Not Supported 00:16:35.021 Error Log Page Entries Supported: 128 00:16:35.021 Keep 
Alive: Supported 00:16:35.021 Keep Alive Granularity: 10000 ms 00:16:35.021 00:16:35.021 NVM Command Set Attributes 00:16:35.021 ========================== 00:16:35.021 Submission Queue Entry Size 00:16:35.021 Max: 64 00:16:35.021 Min: 64 00:16:35.021 Completion Queue Entry Size 00:16:35.021 Max: 16 00:16:35.021 Min: 16 00:16:35.021 Number of Namespaces: 32 00:16:35.021 Compare Command: Supported 00:16:35.021 Write Uncorrectable Command: Not Supported 00:16:35.021 Dataset Management Command: Supported 00:16:35.021 Write Zeroes Command: Supported 00:16:35.021 Set Features Save Field: Not Supported 00:16:35.021 Reservations: Not Supported 00:16:35.021 Timestamp: Not Supported 00:16:35.021 Copy: Supported 00:16:35.021 Volatile Write Cache: Present 00:16:35.021 Atomic Write Unit (Normal): 1 00:16:35.021 Atomic Write Unit (PFail): 1 00:16:35.021 Atomic Compare & Write Unit: 1 00:16:35.021 Fused Compare & Write: Supported 00:16:35.021 Scatter-Gather List 00:16:35.021 SGL Command Set: Supported (Dword aligned) 00:16:35.021 SGL Keyed: Not Supported 00:16:35.021 SGL Bit Bucket Descriptor: Not Supported 00:16:35.021 SGL Metadata Pointer: Not Supported 00:16:35.021 Oversized SGL: Not Supported 00:16:35.021 SGL Metadata Address: Not Supported 00:16:35.021 SGL Offset: Not Supported 00:16:35.021 Transport SGL Data Block: Not Supported 00:16:35.021 Replay Protected Memory Block: Not Supported 00:16:35.021 00:16:35.021 Firmware Slot Information 00:16:35.021 ========================= 00:16:35.021 Active slot: 1 00:16:35.021 Slot 1 Firmware Revision: 24.01.1 00:16:35.021 00:16:35.021 00:16:35.021 Commands Supported and Effects 00:16:35.021 ============================== 00:16:35.021 Admin Commands 00:16:35.021 -------------- 00:16:35.021 Get Log Page (02h): Supported 00:16:35.021 Identify (06h): Supported 00:16:35.021 Abort (08h): Supported 00:16:35.021 Set Features (09h): Supported 00:16:35.021 Get Features (0Ah): Supported 00:16:35.021 Asynchronous Event Request (0Ch): Supported 
00:16:35.021 Keep Alive (18h): Supported 00:16:35.021 I/O Commands 00:16:35.021 ------------ 00:16:35.021 Flush (00h): Supported LBA-Change 00:16:35.021 Write (01h): Supported LBA-Change 00:16:35.021 Read (02h): Supported 00:16:35.021 Compare (05h): Supported 00:16:35.021 Write Zeroes (08h): Supported LBA-Change 00:16:35.021 Dataset Management (09h): Supported LBA-Change 00:16:35.021 Copy (19h): Supported LBA-Change 00:16:35.021 Unknown (79h): Supported LBA-Change 00:16:35.021 Unknown (7Ah): Supported 00:16:35.021 00:16:35.021 Error Log 00:16:35.021 ========= 00:16:35.021 00:16:35.021 Arbitration 00:16:35.021 =========== 00:16:35.021 Arbitration Burst: 1 00:16:35.021 00:16:35.021 Power Management 00:16:35.021 ================ 00:16:35.021 Number of Power States: 1 00:16:35.021 Current Power State: Power State #0 00:16:35.021 Power State #0: 00:16:35.021 Max Power: 0.00 W 00:16:35.021 Non-Operational State: Operational 00:16:35.021 Entry Latency: Not Reported 00:16:35.021 Exit Latency: Not Reported 00:16:35.021 Relative Read Throughput: 0 00:16:35.021 Relative Read Latency: 0 00:16:35.021 Relative Write Throughput: 0 00:16:35.022 Relative Write Latency: 0 00:16:35.022 Idle Power: Not Reported 00:16:35.022 Active Power: Not Reported 00:16:35.022 Non-Operational Permissive Mode: Not Supported 00:16:35.022 00:16:35.022 Health Information 00:16:35.022 ================== 00:16:35.022 Critical Warnings: 00:16:35.022 Available Spare Space: OK 00:16:35.022 Temperature: OK 00:16:35.022 Device Reliability: OK 00:16:35.022 Read Only: No 00:16:35.022 Volatile Memory Backup: OK 00:16:35.022 Current Temperature: 0 Kelvin[2024-10-13 17:26:43.291501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:35.022 [2024-10-13 17:26:43.291513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:35.022 [2024-10-13 17:26:43.291539] 
nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:35.022 [2024-10-13 17:26:43.291550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.022 [2024-10-13 17:26:43.291556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.022 [2024-10-13 17:26:43.291563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.022 [2024-10-13 17:26:43.291569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.022 [2024-10-13 17:26:43.294070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:35.022 [2024-10-13 17:26:43.294082] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:35.022 [2024-10-13 17:26:43.294692] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:35.022 [2024-10-13 17:26:43.294698] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:35.022 [2024-10-13 17:26:43.295676] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:35.022 [2024-10-13 17:26:43.295687] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:35.022 [2024-10-13 17:26:43.295745] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:35.022 
[2024-10-13 17:26:43.297708] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:35.022 (-273 Celsius) 00:16:35.022 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:35.022 Available Spare: 0% 00:16:35.022 Available Spare Threshold: 0% 00:16:35.022 Life Percentage Used: 0% 00:16:35.022 Data Units Read: 0 00:16:35.022 Data Units Written: 0 00:16:35.022 Host Read Commands: 0 00:16:35.022 Host Write Commands: 0 00:16:35.022 Controller Busy Time: 0 minutes 00:16:35.022 Power Cycles: 0 00:16:35.022 Power On Hours: 0 hours 00:16:35.022 Unsafe Shutdowns: 0 00:16:35.022 Unrecoverable Media Errors: 0 00:16:35.022 Lifetime Error Log Entries: 0 00:16:35.022 Warning Temperature Time: 0 minutes 00:16:35.022 Critical Temperature Time: 0 minutes 00:16:35.022 00:16:35.022 Number of Queues 00:16:35.022 ================ 00:16:35.022 Number of I/O Submission Queues: 127 00:16:35.022 Number of I/O Completion Queues: 127 00:16:35.022 00:16:35.022 Active Namespaces 00:16:35.022 ================= 00:16:35.022 Namespace ID:1 00:16:35.022 Error Recovery Timeout: Unlimited 00:16:35.022 Command Set Identifier: NVM (00h) 00:16:35.022 Deallocate: Supported 00:16:35.022 Deallocated/Unwritten Error: Not Supported 00:16:35.022 Deallocated Read Value: Unknown 00:16:35.022 Deallocate in Write Zeroes: Not Supported 00:16:35.022 Deallocated Guard Field: 0xFFFF 00:16:35.022 Flush: Supported 00:16:35.022 Reservation: Supported 00:16:35.022 Namespace Sharing Capabilities: Multiple Controllers 00:16:35.022 Size (in LBAs): 131072 (0GiB) 00:16:35.022 Capacity (in LBAs): 131072 (0GiB) 00:16:35.022 Utilization (in LBAs): 131072 (0GiB) 00:16:35.022 NGUID: 12025160BD1043CE955E9A49D21496FC 00:16:35.022 UUID: 12025160-bd10-43ce-955e-9a49d21496fc 00:16:35.022 Thin Provisioning: Not Supported 00:16:35.022 Per-NS Atomic Units: Yes 00:16:35.022 Atomic Boundary Size (Normal): 0 00:16:35.022 Atomic Boundary Size (PFail): 0 
00:16:35.022 Atomic Boundary Offset: 0 00:16:35.022 Maximum Single Source Range Length: 65535 00:16:35.022 Maximum Copy Length: 65535 00:16:35.022 Maximum Source Range Count: 1 00:16:35.022 NGUID/EUI64 Never Reused: No 00:16:35.022 Namespace Write Protected: No 00:16:35.022 Number of LBA Formats: 1 00:16:35.022 Current LBA Format: LBA Format #00 00:16:35.022 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:35.022 00:16:35.022 17:26:43 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:35.022 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.310 Initializing NVMe Controllers 00:16:40.310 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:40.310 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:40.310 Initialization complete. Launching workers. 
00:16:40.310 ======================================================== 00:16:40.310 Latency(us) 00:16:40.310 Device Information : IOPS MiB/s Average min max 00:16:40.310 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39998.02 156.24 3200.03 843.00 8811.53 00:16:40.310 ======================================================== 00:16:40.310 Total : 39998.02 156.24 3200.03 843.00 8811.53 00:16:40.310 00:16:40.310 17:26:48 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:40.310 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.600 Initializing NVMe Controllers 00:16:45.600 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:45.600 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:45.600 Initialization complete. Launching workers. 
00:16:45.600 ======================================================== 00:16:45.600 Latency(us) 00:16:45.600 Device Information : IOPS MiB/s Average min max 00:16:45.600 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16059.11 62.73 7976.12 5985.64 8980.62 00:16:45.600 ======================================================== 00:16:45.600 Total : 16059.11 62.73 7976.12 5985.64 8980.62 00:16:45.600 00:16:45.600 17:26:53 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:45.600 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.017 Initializing NVMe Controllers 00:16:51.017 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:51.017 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:51.017 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:51.017 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:51.017 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:51.017 Initialization complete. Launching workers. 
00:16:51.017 Starting thread on core 2 00:16:51.017 Starting thread on core 3 00:16:51.017 Starting thread on core 1 00:16:51.017 17:26:59 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:51.017 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.315 Initializing NVMe Controllers 00:16:54.315 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:54.315 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:54.315 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:54.315 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:54.315 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:54.315 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:54.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:54.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:54.315 Initialization complete. Launching workers. 
00:16:54.315 Starting thread on core 1 with urgent priority queue 00:16:54.315 Starting thread on core 2 with urgent priority queue 00:16:54.315 Starting thread on core 3 with urgent priority queue 00:16:54.315 Starting thread on core 0 with urgent priority queue 00:16:54.315 SPDK bdev Controller (SPDK1 ) core 0: 8083.00 IO/s 12.37 secs/100000 ios 00:16:54.315 SPDK bdev Controller (SPDK1 ) core 1: 12769.00 IO/s 7.83 secs/100000 ios 00:16:54.315 SPDK bdev Controller (SPDK1 ) core 2: 9460.33 IO/s 10.57 secs/100000 ios 00:16:54.315 SPDK bdev Controller (SPDK1 ) core 3: 11423.33 IO/s 8.75 secs/100000 ios 00:16:54.315 ======================================================== 00:16:54.315 00:16:54.315 17:27:02 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:54.315 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.315 Initializing NVMe Controllers 00:16:54.315 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:54.315 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:54.315 Namespace ID: 1 size: 0GB 00:16:54.315 Initialization complete. 00:16:54.315 INFO: using host memory buffer for IO 00:16:54.315 Hello world! 00:16:54.315 17:27:02 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:54.315 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.701 Initializing NVMe Controllers 00:16:55.701 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:55.701 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:55.701 Initialization complete. Launching workers. 
00:16:55.701 submit (in ns) avg, min, max = 7813.0, 3864.2, 3999347.5 00:16:55.701 complete (in ns) avg, min, max = 18727.0, 2375.0, 5992591.7 00:16:55.701 00:16:55.701 Submit histogram 00:16:55.701 ================ 00:16:55.701 Range in us Cumulative Count 00:16:55.701 3.840 - 3.867: 0.0105% ( 2) 00:16:55.701 3.867 - 3.893: 1.6586% ( 313) 00:16:55.701 3.893 - 3.920: 6.5027% ( 920) 00:16:55.701 3.920 - 3.947: 16.5543% ( 1909) 00:16:55.701 3.947 - 3.973: 29.0227% ( 2368) 00:16:55.701 3.973 - 4.000: 40.8751% ( 2251) 00:16:55.701 4.000 - 4.027: 54.2965% ( 2549) 00:16:55.702 4.027 - 4.053: 70.2190% ( 3024) 00:16:55.702 4.053 - 4.080: 84.8357% ( 2776) 00:16:55.702 4.080 - 4.107: 92.7390% ( 1501) 00:16:55.702 4.107 - 4.133: 96.9987% ( 809) 00:16:55.702 4.133 - 4.160: 98.7100% ( 325) 00:16:55.702 4.160 - 4.187: 99.3208% ( 116) 00:16:55.702 4.187 - 4.213: 99.4471% ( 24) 00:16:55.702 4.213 - 4.240: 99.4629% ( 3) 00:16:55.702 4.320 - 4.347: 99.4787% ( 3) 00:16:55.702 4.347 - 4.373: 99.4840% ( 1) 00:16:55.702 4.533 - 4.560: 99.4893% ( 1) 00:16:55.702 4.747 - 4.773: 99.4998% ( 2) 00:16:55.702 4.773 - 4.800: 99.5051% ( 1) 00:16:55.702 4.960 - 4.987: 99.5103% ( 1) 00:16:55.702 4.987 - 5.013: 99.5156% ( 1) 00:16:55.702 5.013 - 5.040: 99.5209% ( 1) 00:16:55.702 5.040 - 5.067: 99.5261% ( 1) 00:16:55.702 5.120 - 5.147: 99.5314% ( 1) 00:16:55.702 5.173 - 5.200: 99.5366% ( 1) 00:16:55.702 5.467 - 5.493: 99.5419% ( 1) 00:16:55.702 5.653 - 5.680: 99.5472% ( 1) 00:16:55.702 5.920 - 5.947: 99.5577% ( 2) 00:16:55.702 6.000 - 6.027: 99.5630% ( 1) 00:16:55.702 6.027 - 6.053: 99.5840% ( 4) 00:16:55.702 6.080 - 6.107: 99.5946% ( 2) 00:16:55.702 6.107 - 6.133: 99.6156% ( 4) 00:16:55.702 6.187 - 6.213: 99.6209% ( 1) 00:16:55.702 6.213 - 6.240: 99.6262% ( 1) 00:16:55.702 6.240 - 6.267: 99.6314% ( 1) 00:16:55.702 6.293 - 6.320: 99.6420% ( 2) 00:16:55.702 6.347 - 6.373: 99.6472% ( 1) 00:16:55.702 6.400 - 6.427: 99.6525% ( 1) 00:16:55.702 6.480 - 6.507: 99.6578% ( 1) 00:16:55.702 6.507 - 6.533: 
99.6683% ( 2) 00:16:55.702 6.533 - 6.560: 99.6788% ( 2) 00:16:55.702 6.560 - 6.587: 99.6893% ( 2) 00:16:55.702 6.587 - 6.613: 99.7051% ( 3) 00:16:55.702 6.640 - 6.667: 99.7157% ( 2) 00:16:55.702 6.693 - 6.720: 99.7262% ( 2) 00:16:55.702 6.747 - 6.773: 99.7367% ( 2) 00:16:55.702 6.800 - 6.827: 99.7420% ( 1) 00:16:55.702 6.827 - 6.880: 99.7473% ( 1) 00:16:55.702 6.933 - 6.987: 99.7525% ( 1) 00:16:55.702 6.987 - 7.040: 99.7789% ( 5) 00:16:55.702 7.040 - 7.093: 99.7841% ( 1) 00:16:55.702 7.147 - 7.200: 99.7894% ( 1) 00:16:55.702 7.200 - 7.253: 99.8052% ( 3) 00:16:55.702 7.253 - 7.307: 99.8157% ( 2) 00:16:55.702 7.307 - 7.360: 99.8315% ( 3) 00:16:55.702 7.360 - 7.413: 99.8420% ( 2) 00:16:55.702 7.573 - 7.627: 99.8526% ( 2) 00:16:55.702 7.733 - 7.787: 99.8578% ( 1) 00:16:55.702 7.840 - 7.893: 99.8631% ( 1) 00:16:55.702 7.893 - 7.947: 99.8736% ( 2) 00:16:55.702 8.053 - 8.107: 99.8789% ( 1) 00:16:55.702 8.107 - 8.160: 99.8894% ( 2) 00:16:55.702 8.320 - 8.373: 99.8947% ( 1) 00:16:55.702 12.587 - 12.640: 99.9000% ( 1) 00:16:55.702 46.080 - 46.293: 99.9052% ( 1) 00:16:55.702 3986.773 - 4014.080: 100.0000% ( 18) 00:16:55.702 00:16:55.702 Complete histogram 00:16:55.702 ================== 00:16:55.702 Range in us Cumulative Count 00:16:55.702 2.373 - 2.387: 0.2633% ( 50) 00:16:55.702 2.387 - 2.400: 0.5950% ( 63) 00:16:55.702 2.400 - 2.413: 0.8056% ( 40) 00:16:55.702 2.413 - 2.427: 9.6356% ( 1677) 00:16:55.702 2.427 - 2.440: 53.2224% ( 8278) 00:16:55.702 2.440 - 2.453: 70.8298% ( 3344) 00:16:55.702 2.453 - 2.467: 86.9524% ( 3062) 00:16:55.702 2.467 - 2.480: 94.2344% ( 1383) 00:16:55.702 2.480 - 2.493: 96.0510% ( 345) 00:16:55.702 2.493 - 2.507: 97.2831% ( 234) 00:16:55.702 2.507 - 2.520: 98.3572% ( 204) 00:16:55.702 2.520 - 2.533: 98.8837% ( 100) 00:16:55.702 2.533 - 2.547: 99.2260% ( 65) 00:16:55.702 2.547 - 2.560: 99.3102% ( 16) 00:16:55.702 2.560 - 2.573: 99.3471% ( 7) 00:16:55.702 2.800 - 2.813: 99.3524% ( 1) 00:16:55.702 2.960 - 2.973: 99.3576% ( 1) 00:16:55.702 3.040 - 
3.053: 99.3629% ( 1) 00:16:55.702 3.067 - 3.080: 99.3734% ( 2) 00:16:55.702 3.133 - 3.147: 99.3787% ( 1) 00:16:55.702 4.560 - 4.587: 99.3840% ( 1) 00:16:55.702 4.587 - 4.613: 99.3892% ( 1) 00:16:55.702 4.747 - 4.773: 99.3945% ( 1) 00:16:55.702 4.827 - 4.853: 99.4050% ( 2) 00:16:55.702 4.853 - 4.880: 99.4103% ( 1) 00:16:55.702 4.880 - 4.907: 99.4208% ( 2) 00:16:55.702 4.960 - 4.987: 99.4261% ( 1) 00:16:55.702 5.013 - 5.040: 99.4419% ( 3) 00:16:55.702 5.040 - 5.067: 99.4471% ( 1) 00:16:55.702 5.067 - 5.093: 99.4577% ( 2) 00:16:55.702 5.147 - 5.173: 99.4629% ( 1) 00:16:55.702 5.253 - 5.280: 99.4682% ( 1) 00:16:55.702 5.280 - 5.307: 99.4787% ( 2) 00:16:55.702 5.333 - 5.360: 99.4893% ( 2) 00:16:55.702 5.387 - 5.413: 99.4998% ( 2) 00:16:55.702 5.440 - 5.467: 99.5051% ( 1) 00:16:55.702 5.573 - 5.600: 99.5103% ( 1) 00:16:55.702 5.600 - 5.627: 99.5156% ( 1) 00:16:55.702 5.627 - 5.653: 99.5209% ( 1) 00:16:55.702 5.653 - 5.680: 99.5261% ( 1) 00:16:55.702 5.733 - 5.760: 99.5366% ( 2) 00:16:55.702 6.053 - 6.080: 99.5419% ( 1) 00:16:55.702 6.080 - 6.107: 99.5472% ( 1) 00:16:55.702 6.133 - 6.160: 99.5630% ( 3) 00:16:55.702 6.560 - 6.587: 99.5682% ( 1) 00:16:55.702 7.413 - 7.467: 99.5735% ( 1) 00:16:55.702 10.133 - 10.187: 99.5788% ( 1) 00:16:55.702 13.067 - 13.120: 99.5840% ( 1) 00:16:55.702 14.080 - 14.187: 99.5893% ( 1) 00:16:55.702 2020.693 - 2034.347: 99.5946% ( 1) 00:16:55.702 2034.347 - 2048.000: 99.5998% ( 1) 00:16:55.702 2430.293 - 2443.947: 99.6051% ( 1) 00:16:55.702 3986.773 - 4014.080: 99.9895% ( 73) 00:16:55.702 4969.813 - 4997.120: 99.9947% ( 1) 00:16:55.702 5980.160 - 6007.467: 100.0000% ( 1) 00:16:55.702 00:16:55.702 17:27:03 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:55.702 17:27:03 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:55.702 17:27:03 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:55.702 
17:27:03 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:55.702 17:27:03 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:55.702 [2024-10-13 17:27:04.087008] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:55.702 [ 00:16:55.702 { 00:16:55.702 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:55.702 "subtype": "Discovery", 00:16:55.702 "listen_addresses": [], 00:16:55.702 "allow_any_host": true, 00:16:55.702 "hosts": [] 00:16:55.702 }, 00:16:55.702 { 00:16:55.702 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:55.702 "subtype": "NVMe", 00:16:55.702 "listen_addresses": [ 00:16:55.702 { 00:16:55.702 "transport": "VFIOUSER", 00:16:55.702 "trtype": "VFIOUSER", 00:16:55.702 "adrfam": "IPv4", 00:16:55.702 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:55.702 "trsvcid": "0" 00:16:55.702 } 00:16:55.702 ], 00:16:55.702 "allow_any_host": true, 00:16:55.702 "hosts": [], 00:16:55.702 "serial_number": "SPDK1", 00:16:55.702 "model_number": "SPDK bdev Controller", 00:16:55.702 "max_namespaces": 32, 00:16:55.702 "min_cntlid": 1, 00:16:55.702 "max_cntlid": 65519, 00:16:55.702 "namespaces": [ 00:16:55.702 { 00:16:55.702 "nsid": 1, 00:16:55.702 "bdev_name": "Malloc1", 00:16:55.702 "name": "Malloc1", 00:16:55.702 "nguid": "12025160BD1043CE955E9A49D21496FC", 00:16:55.702 "uuid": "12025160-bd10-43ce-955e-9a49d21496fc" 00:16:55.702 } 00:16:55.702 ] 00:16:55.702 }, 00:16:55.702 { 00:16:55.702 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:55.702 "subtype": "NVMe", 00:16:55.702 "listen_addresses": [ 00:16:55.702 { 00:16:55.702 "transport": "VFIOUSER", 00:16:55.702 "trtype": "VFIOUSER", 00:16:55.702 "adrfam": "IPv4", 00:16:55.702 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:55.702 "trsvcid": "0" 00:16:55.702 } 00:16:55.702 ], 00:16:55.702 
"allow_any_host": true, 00:16:55.702 "hosts": [], 00:16:55.702 "serial_number": "SPDK2", 00:16:55.702 "model_number": "SPDK bdev Controller", 00:16:55.702 "max_namespaces": 32, 00:16:55.702 "min_cntlid": 1, 00:16:55.702 "max_cntlid": 65519, 00:16:55.702 "namespaces": [ 00:16:55.702 { 00:16:55.702 "nsid": 1, 00:16:55.702 "bdev_name": "Malloc2", 00:16:55.702 "name": "Malloc2", 00:16:55.702 "nguid": "F28A5324F99A49EB9759968707300154", 00:16:55.702 "uuid": "f28a5324-f99a-49eb-9759-968707300154" 00:16:55.702 } 00:16:55.702 ] 00:16:55.702 } 00:16:55.702 ] 00:16:55.702 17:27:04 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:55.702 17:27:04 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3145096 00:16:55.702 17:27:04 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:55.702 17:27:04 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:55.702 17:27:04 -- common/autotest_common.sh@1244 -- # local i=0 00:16:55.702 17:27:04 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:55.702 17:27:04 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:55.702 17:27:04 -- common/autotest_common.sh@1255 -- # return 0 00:16:55.702 17:27:04 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:55.702 17:27:04 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:55.702 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.964 Malloc3 00:16:55.964 17:27:04 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:55.964 17:27:04 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:56.225 Asynchronous Event Request test 00:16:56.225 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:56.225 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:56.225 Registering asynchronous event callbacks... 00:16:56.225 Starting namespace attribute notice tests for all controllers... 00:16:56.225 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:56.225 aer_cb - Changed Namespace 00:16:56.225 Cleaning up... 
00:16:56.225 [ 00:16:56.225 { 00:16:56.225 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:56.225 "subtype": "Discovery", 00:16:56.225 "listen_addresses": [], 00:16:56.225 "allow_any_host": true, 00:16:56.225 "hosts": [] 00:16:56.225 }, 00:16:56.225 { 00:16:56.225 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:56.225 "subtype": "NVMe", 00:16:56.225 "listen_addresses": [ 00:16:56.225 { 00:16:56.225 "transport": "VFIOUSER", 00:16:56.225 "trtype": "VFIOUSER", 00:16:56.225 "adrfam": "IPv4", 00:16:56.225 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:56.225 "trsvcid": "0" 00:16:56.225 } 00:16:56.225 ], 00:16:56.225 "allow_any_host": true, 00:16:56.225 "hosts": [], 00:16:56.225 "serial_number": "SPDK1", 00:16:56.225 "model_number": "SPDK bdev Controller", 00:16:56.225 "max_namespaces": 32, 00:16:56.225 "min_cntlid": 1, 00:16:56.225 "max_cntlid": 65519, 00:16:56.225 "namespaces": [ 00:16:56.225 { 00:16:56.225 "nsid": 1, 00:16:56.225 "bdev_name": "Malloc1", 00:16:56.225 "name": "Malloc1", 00:16:56.225 "nguid": "12025160BD1043CE955E9A49D21496FC", 00:16:56.225 "uuid": "12025160-bd10-43ce-955e-9a49d21496fc" 00:16:56.225 }, 00:16:56.225 { 00:16:56.225 "nsid": 2, 00:16:56.225 "bdev_name": "Malloc3", 00:16:56.225 "name": "Malloc3", 00:16:56.225 "nguid": "E74C6FA6F82442A991DD3D812C24038B", 00:16:56.225 "uuid": "e74c6fa6-f824-42a9-91dd-3d812c24038b" 00:16:56.225 } 00:16:56.225 ] 00:16:56.225 }, 00:16:56.225 { 00:16:56.225 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:56.225 "subtype": "NVMe", 00:16:56.225 "listen_addresses": [ 00:16:56.225 { 00:16:56.225 "transport": "VFIOUSER", 00:16:56.225 "trtype": "VFIOUSER", 00:16:56.225 "adrfam": "IPv4", 00:16:56.225 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:56.225 "trsvcid": "0" 00:16:56.225 } 00:16:56.225 ], 00:16:56.225 "allow_any_host": true, 00:16:56.225 "hosts": [], 00:16:56.225 "serial_number": "SPDK2", 00:16:56.225 "model_number": "SPDK bdev Controller", 00:16:56.225 "max_namespaces": 32, 00:16:56.225 
"min_cntlid": 1, 00:16:56.225 "max_cntlid": 65519, 00:16:56.225 "namespaces": [ 00:16:56.225 { 00:16:56.225 "nsid": 1, 00:16:56.225 "bdev_name": "Malloc2", 00:16:56.225 "name": "Malloc2", 00:16:56.225 "nguid": "F28A5324F99A49EB9759968707300154", 00:16:56.225 "uuid": "f28a5324-f99a-49eb-9759-968707300154" 00:16:56.225 } 00:16:56.225 ] 00:16:56.225 } 00:16:56.225 ] 00:16:56.225 17:27:04 -- target/nvmf_vfio_user.sh@44 -- # wait 3145096 00:16:56.225 17:27:04 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:56.225 17:27:04 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:56.225 17:27:04 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:56.225 17:27:04 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:56.225 [2024-10-13 17:27:04.685017] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:56.225 [2024-10-13 17:27:04.685080] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3145346 ] 00:16:56.225 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.225 [2024-10-13 17:27:04.716043] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:56.225 [2024-10-13 17:27:04.722743] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:56.225 [2024-10-13 17:27:04.722765] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8f5bfc7000 00:16:56.225 [2024-10-13 17:27:04.723741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:56.225 [2024-10-13 17:27:04.724745] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:56.225 [2024-10-13 17:27:04.725751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:56.225 [2024-10-13 17:27:04.726751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:56.225 [2024-10-13 17:27:04.727758] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:56.225 [2024-10-13 17:27:04.728765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:56.225 [2024-10-13 17:27:04.729773] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:16:56.225 [2024-10-13 17:27:04.730783] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:56.225 [2024-10-13 17:27:04.731792] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:56.225 [2024-10-13 17:27:04.731810] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8f5acd1000 00:16:56.225 [2024-10-13 17:27:04.733149] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:56.488 [2024-10-13 17:27:04.753371] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:56.488 [2024-10-13 17:27:04.753394] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:56.488 [2024-10-13 17:27:04.755455] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:56.488 [2024-10-13 17:27:04.755499] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:56.488 [2024-10-13 17:27:04.755579] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:56.488 [2024-10-13 17:27:04.755596] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:56.488 [2024-10-13 17:27:04.755601] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:56.488 [2024-10-13 17:27:04.756463] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:56.489 [2024-10-13 17:27:04.756473] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:56.489 [2024-10-13 17:27:04.756480] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:56.489 [2024-10-13 17:27:04.757465] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:56.489 [2024-10-13 17:27:04.757475] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:56.489 [2024-10-13 17:27:04.757483] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:56.489 [2024-10-13 17:27:04.758473] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:56.489 [2024-10-13 17:27:04.758482] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:56.489 [2024-10-13 17:27:04.759480] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:56.489 [2024-10-13 17:27:04.759488] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:56.489 [2024-10-13 17:27:04.759493] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:56.489 [2024-10-13 17:27:04.759500] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:56.489 [2024-10-13 17:27:04.759605] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:56.489 [2024-10-13 17:27:04.759611] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:56.489 [2024-10-13 17:27:04.759616] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:56.489 [2024-10-13 17:27:04.760491] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:56.489 [2024-10-13 17:27:04.761500] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:56.489 [2024-10-13 17:27:04.762510] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:56.489 [2024-10-13 17:27:04.763535] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:56.489 [2024-10-13 17:27:04.764523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:56.489 [2024-10-13 17:27:04.764532] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:56.489 [2024-10-13 17:27:04.764537] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.764558] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:56.489 [2024-10-13 17:27:04.764566] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.764577] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:56.489 [2024-10-13 17:27:04.764582] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:56.489 [2024-10-13 17:27:04.764594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:56.489 [2024-10-13 17:27:04.775071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:56.489 [2024-10-13 17:27:04.775083] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:56.489 [2024-10-13 17:27:04.775088] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:56.489 [2024-10-13 17:27:04.775092] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:56.489 [2024-10-13 17:27:04.775097] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:56.489 [2024-10-13 17:27:04.775102] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:56.489 [2024-10-13 17:27:04.775107] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:56.489 [2024-10-13 17:27:04.775112] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.775122] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.775132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:56.489 [2024-10-13 17:27:04.783069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:56.489 [2024-10-13 17:27:04.783085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.489 [2024-10-13 17:27:04.783094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.489 [2024-10-13 17:27:04.783102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.489 [2024-10-13 17:27:04.783113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.489 [2024-10-13 17:27:04.783118] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.783127] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.783136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:56.489 [2024-10-13 17:27:04.791068] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:56.489 [2024-10-13 17:27:04.791076] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:56.489 [2024-10-13 17:27:04.791081] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.791088] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.791096] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.791105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:56.489 [2024-10-13 17:27:04.799068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:56.489 [2024-10-13 17:27:04.799131] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.799148] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.799156] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:56.489 [2024-10-13 17:27:04.799160] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:56.489 [2024-10-13 17:27:04.799167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:56.489 [2024-10-13 17:27:04.807069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:56.489 [2024-10-13 17:27:04.807083] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:56.489 [2024-10-13 17:27:04.807094] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.807102] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.807109] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:56.489 [2024-10-13 17:27:04.807113] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:56.489 [2024-10-13 17:27:04.807120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:56.489 [2024-10-13 17:27:04.815067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:56.489 [2024-10-13 17:27:04.815080] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.815090] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.815097] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:56.489 
[2024-10-13 17:27:04.815102] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:56.489 [2024-10-13 17:27:04.815108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:56.489 [2024-10-13 17:27:04.823067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:56.489 [2024-10-13 17:27:04.823076] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.823083] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.823091] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.823097] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.823102] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.823107] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:56.489 [2024-10-13 17:27:04.823112] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:56.489 [2024-10-13 17:27:04.823117] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:56.489 [2024-10-13 
17:27:04.823133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:56.489 [2024-10-13 17:27:04.831068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:56.489 [2024-10-13 17:27:04.831082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:56.489 [2024-10-13 17:27:04.839068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:56.490 [2024-10-13 17:27:04.839082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:56.490 [2024-10-13 17:27:04.847069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:56.490 [2024-10-13 17:27:04.847082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:56.490 [2024-10-13 17:27:04.855067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:56.490 [2024-10-13 17:27:04.855081] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:56.490 [2024-10-13 17:27:04.855085] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:56.490 [2024-10-13 17:27:04.855089] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:56.490 [2024-10-13 17:27:04.855092] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:56.490 [2024-10-13 17:27:04.855099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 
cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:56.490 [2024-10-13 17:27:04.855109] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:56.490 [2024-10-13 17:27:04.855114] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:56.490 [2024-10-13 17:27:04.855120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:56.490 [2024-10-13 17:27:04.855127] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:56.490 [2024-10-13 17:27:04.855131] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:56.490 [2024-10-13 17:27:04.855137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:56.490 [2024-10-13 17:27:04.855145] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:56.490 [2024-10-13 17:27:04.855149] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:56.490 [2024-10-13 17:27:04.855155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:56.490 [2024-10-13 17:27:04.863067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:56.490 [2024-10-13 17:27:04.863084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:56.490 [2024-10-13 17:27:04.863093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:56.490 [2024-10-13 
17:27:04.863100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:56.490 ===================================================== 00:16:56.490 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:56.490 ===================================================== 00:16:56.490 Controller Capabilities/Features 00:16:56.490 ================================ 00:16:56.490 Vendor ID: 4e58 00:16:56.490 Subsystem Vendor ID: 4e58 00:16:56.490 Serial Number: SPDK2 00:16:56.490 Model Number: SPDK bdev Controller 00:16:56.490 Firmware Version: 24.01.1 00:16:56.490 Recommended Arb Burst: 6 00:16:56.490 IEEE OUI Identifier: 8d 6b 50 00:16:56.490 Multi-path I/O 00:16:56.490 May have multiple subsystem ports: Yes 00:16:56.490 May have multiple controllers: Yes 00:16:56.490 Associated with SR-IOV VF: No 00:16:56.490 Max Data Transfer Size: 131072 00:16:56.490 Max Number of Namespaces: 32 00:16:56.490 Max Number of I/O Queues: 127 00:16:56.490 NVMe Specification Version (VS): 1.3 00:16:56.490 NVMe Specification Version (Identify): 1.3 00:16:56.490 Maximum Queue Entries: 256 00:16:56.490 Contiguous Queues Required: Yes 00:16:56.490 Arbitration Mechanisms Supported 00:16:56.490 Weighted Round Robin: Not Supported 00:16:56.490 Vendor Specific: Not Supported 00:16:56.490 Reset Timeout: 15000 ms 00:16:56.490 Doorbell Stride: 4 bytes 00:16:56.490 NVM Subsystem Reset: Not Supported 00:16:56.490 Command Sets Supported 00:16:56.490 NVM Command Set: Supported 00:16:56.490 Boot Partition: Not Supported 00:16:56.490 Memory Page Size Minimum: 4096 bytes 00:16:56.490 Memory Page Size Maximum: 4096 bytes 00:16:56.490 Persistent Memory Region: Not Supported 00:16:56.490 Optional Asynchronous Events Supported 00:16:56.490 Namespace Attribute Notices: Supported 00:16:56.490 Firmware Activation Notices: Not Supported 00:16:56.490 ANA Change Notices: Not Supported 00:16:56.490 PLE 
Aggregate Log Change Notices: Not Supported 00:16:56.490 LBA Status Info Alert Notices: Not Supported 00:16:56.490 EGE Aggregate Log Change Notices: Not Supported 00:16:56.490 Normal NVM Subsystem Shutdown event: Not Supported 00:16:56.490 Zone Descriptor Change Notices: Not Supported 00:16:56.490 Discovery Log Change Notices: Not Supported 00:16:56.490 Controller Attributes 00:16:56.490 128-bit Host Identifier: Supported 00:16:56.490 Non-Operational Permissive Mode: Not Supported 00:16:56.490 NVM Sets: Not Supported 00:16:56.490 Read Recovery Levels: Not Supported 00:16:56.490 Endurance Groups: Not Supported 00:16:56.490 Predictable Latency Mode: Not Supported 00:16:56.490 Traffic Based Keep ALive: Not Supported 00:16:56.490 Namespace Granularity: Not Supported 00:16:56.490 SQ Associations: Not Supported 00:16:56.490 UUID List: Not Supported 00:16:56.490 Multi-Domain Subsystem: Not Supported 00:16:56.490 Fixed Capacity Management: Not Supported 00:16:56.490 Variable Capacity Management: Not Supported 00:16:56.490 Delete Endurance Group: Not Supported 00:16:56.490 Delete NVM Set: Not Supported 00:16:56.490 Extended LBA Formats Supported: Not Supported 00:16:56.490 Flexible Data Placement Supported: Not Supported 00:16:56.490 00:16:56.490 Controller Memory Buffer Support 00:16:56.490 ================================ 00:16:56.490 Supported: No 00:16:56.490 00:16:56.490 Persistent Memory Region Support 00:16:56.490 ================================ 00:16:56.490 Supported: No 00:16:56.490 00:16:56.490 Admin Command Set Attributes 00:16:56.490 ============================ 00:16:56.490 Security Send/Receive: Not Supported 00:16:56.490 Format NVM: Not Supported 00:16:56.490 Firmware Activate/Download: Not Supported 00:16:56.490 Namespace Management: Not Supported 00:16:56.490 Device Self-Test: Not Supported 00:16:56.490 Directives: Not Supported 00:16:56.490 NVMe-MI: Not Supported 00:16:56.490 Virtualization Management: Not Supported 00:16:56.490 Doorbell Buffer Config: 
Not Supported 00:16:56.490 Get LBA Status Capability: Not Supported 00:16:56.490 Command & Feature Lockdown Capability: Not Supported 00:16:56.490 Abort Command Limit: 4 00:16:56.490 Async Event Request Limit: 4 00:16:56.490 Number of Firmware Slots: N/A 00:16:56.490 Firmware Slot 1 Read-Only: N/A 00:16:56.490 Firmware Activation Without Reset: N/A 00:16:56.490 Multiple Update Detection Support: N/A 00:16:56.490 Firmware Update Granularity: No Information Provided 00:16:56.490 Per-Namespace SMART Log: No 00:16:56.490 Asymmetric Namespace Access Log Page: Not Supported 00:16:56.490 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:56.490 Command Effects Log Page: Supported 00:16:56.490 Get Log Page Extended Data: Supported 00:16:56.490 Telemetry Log Pages: Not Supported 00:16:56.490 Persistent Event Log Pages: Not Supported 00:16:56.490 Supported Log Pages Log Page: May Support 00:16:56.490 Commands Supported & Effects Log Page: Not Supported 00:16:56.490 Feature Identifiers & Effects Log Page:May Support 00:16:56.490 NVMe-MI Commands & Effects Log Page: May Support 00:16:56.490 Data Area 4 for Telemetry Log: Not Supported 00:16:56.490 Error Log Page Entries Supported: 128 00:16:56.490 Keep Alive: Supported 00:16:56.490 Keep Alive Granularity: 10000 ms 00:16:56.490 00:16:56.490 NVM Command Set Attributes 00:16:56.490 ========================== 00:16:56.490 Submission Queue Entry Size 00:16:56.490 Max: 64 00:16:56.490 Min: 64 00:16:56.490 Completion Queue Entry Size 00:16:56.490 Max: 16 00:16:56.490 Min: 16 00:16:56.490 Number of Namespaces: 32 00:16:56.490 Compare Command: Supported 00:16:56.490 Write Uncorrectable Command: Not Supported 00:16:56.490 Dataset Management Command: Supported 00:16:56.490 Write Zeroes Command: Supported 00:16:56.490 Set Features Save Field: Not Supported 00:16:56.490 Reservations: Not Supported 00:16:56.490 Timestamp: Not Supported 00:16:56.490 Copy: Supported 00:16:56.490 Volatile Write Cache: Present 00:16:56.490 Atomic Write Unit 
(Normal): 1 00:16:56.490 Atomic Write Unit (PFail): 1 00:16:56.490 Atomic Compare & Write Unit: 1 00:16:56.490 Fused Compare & Write: Supported 00:16:56.490 Scatter-Gather List 00:16:56.490 SGL Command Set: Supported (Dword aligned) 00:16:56.490 SGL Keyed: Not Supported 00:16:56.490 SGL Bit Bucket Descriptor: Not Supported 00:16:56.490 SGL Metadata Pointer: Not Supported 00:16:56.490 Oversized SGL: Not Supported 00:16:56.490 SGL Metadata Address: Not Supported 00:16:56.490 SGL Offset: Not Supported 00:16:56.490 Transport SGL Data Block: Not Supported 00:16:56.490 Replay Protected Memory Block: Not Supported 00:16:56.490 00:16:56.490 Firmware Slot Information 00:16:56.490 ========================= 00:16:56.490 Active slot: 1 00:16:56.490 Slot 1 Firmware Revision: 24.01.1 00:16:56.490 00:16:56.490 00:16:56.490 Commands Supported and Effects 00:16:56.490 ============================== 00:16:56.490 Admin Commands 00:16:56.490 -------------- 00:16:56.490 Get Log Page (02h): Supported 00:16:56.490 Identify (06h): Supported 00:16:56.490 Abort (08h): Supported 00:16:56.490 Set Features (09h): Supported 00:16:56.490 Get Features (0Ah): Supported 00:16:56.490 Asynchronous Event Request (0Ch): Supported 00:16:56.490 Keep Alive (18h): Supported 00:16:56.490 I/O Commands 00:16:56.490 ------------ 00:16:56.490 Flush (00h): Supported LBA-Change 00:16:56.491 Write (01h): Supported LBA-Change 00:16:56.491 Read (02h): Supported 00:16:56.491 Compare (05h): Supported 00:16:56.491 Write Zeroes (08h): Supported LBA-Change 00:16:56.491 Dataset Management (09h): Supported LBA-Change 00:16:56.491 Copy (19h): Supported LBA-Change 00:16:56.491 Unknown (79h): Supported LBA-Change 00:16:56.491 Unknown (7Ah): Supported 00:16:56.491 00:16:56.491 Error Log 00:16:56.491 ========= 00:16:56.491 00:16:56.491 Arbitration 00:16:56.491 =========== 00:16:56.491 Arbitration Burst: 1 00:16:56.491 00:16:56.491 Power Management 00:16:56.491 ================ 00:16:56.491 Number of Power States: 1 00:16:56.491 
Current Power State: Power State #0 00:16:56.491 Power State #0: 00:16:56.491 Max Power: 0.00 W 00:16:56.491 Non-Operational State: Operational 00:16:56.491 Entry Latency: Not Reported 00:16:56.491 Exit Latency: Not Reported 00:16:56.491 Relative Read Throughput: 0 00:16:56.491 Relative Read Latency: 0 00:16:56.491 Relative Write Throughput: 0 00:16:56.491 Relative Write Latency: 0 00:16:56.491 Idle Power: Not Reported 00:16:56.491 Active Power: Not Reported 00:16:56.491 Non-Operational Permissive Mode: Not Supported 00:16:56.491 00:16:56.491 Health Information 00:16:56.491 ================== 00:16:56.491 Critical Warnings: 00:16:56.491 Available Spare Space: OK 00:16:56.491 Temperature: OK 00:16:56.491 Device Reliability: OK 00:16:56.491 Read Only: No 00:16:56.491 Volatile Memory Backup: OK 00:16:56.491 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:56.491 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:56.491 Available Spare: 0% 00:16:56.491 Available Spare Threshold: 0% 00:16:56.491 Life Percentage Used: 0% 00:16:56.491 Data Units Read: 0 00:16:56.491 Data Units Written: 0 00:16:56.491 Host Read Commands: 0 00:16:56.491 Host Write Commands: 0 00:16:56.491 Controller Busy Time: 0 minutes 00:16:56.491 Power Cycles: 0 00:16:56.491 Power On Hours: 0 hours 00:16:56.491 Unsafe Shutdowns: 0 00:16:56.491 Unrecoverable Media Errors: 0 00:16:56.491 Lifetime Error Log Entries: 0 00:16:56.491 Warning Temperature Time: 0 minutes 00:16:56.491 Critical Temperature Time: 0 minutes 00:16:56.491 00:16:56.491 Number of Queues 00:16:56.491 ================ 00:16:56.491 Number of I/O Submission Queues: 127 00:16:56.491 Number of I/O Completion Queues: 127 00:16:56.491 00:16:56.491 Active Namespaces 00:16:56.491 ================= 00:16:56.491 Namespace ID:1 00:16:56.491 Error Recovery Timeout: Unlimited 00:16:56.491 Command Set Identifier: NVM (00h) 00:16:56.491 Deallocate: Supported 00:16:56.491 Deallocated/Unwritten Error: Not Supported 00:16:56.491 Deallocated Read Value: Unknown 00:16:56.491 Deallocate in Write Zeroes: Not Supported 00:16:56.491 Deallocated Guard Field: 0xFFFF 00:16:56.491 Flush: Supported 00:16:56.491 Reservation: Supported 00:16:56.491 Namespace Sharing Capabilities: Multiple Controllers 00:16:56.491 Size (in LBAs): 131072 (0GiB) 00:16:56.491 Capacity (in LBAs): 131072 (0GiB) 00:16:56.491 Utilization (in LBAs): 131072 (0GiB) 00:16:56.491 NGUID: F28A5324F99A49EB9759968707300154 00:16:56.491 UUID: f28a5324-f99a-49eb-9759-968707300154 00:16:56.491 Thin Provisioning: Not Supported 00:16:56.491 Per-NS Atomic Units: Yes 00:16:56.491 Atomic Boundary Size (Normal): 0 00:16:56.491 Atomic Boundary Size (PFail): 0 00:16:56.491 Atomic Boundary Offset: 0 00:16:56.491 Maximum Single Source Range Length: 65535 00:16:56.491 Maximum Copy Length: 65535 00:16:56.491 Maximum Source Range Count: 1 00:16:56.491 NGUID/EUI64 Never Reused: No 00:16:56.491 Namespace Write Protected: No 00:16:56.491 Number of LBA Formats: 1 00:16:56.491 Current LBA Format: LBA Format #00 00:16:56.491 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:56.491 00:16:56.491
[2024-10-13 17:27:04.863203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:56.491 [2024-10-13 17:27:04.871113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:56.491 [2024-10-13 17:27:04.871144] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:56.491 [2024-10-13 17:27:04.871153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.491 [2024-10-13 17:27:04.871160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.491 [2024-10-13 17:27:04.871166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.491 [2024-10-13 17:27:04.871172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.491 [2024-10-13 17:27:04.871215] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:56.491 [2024-10-13 17:27:04.871225] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:56.491 [2024-10-13 17:27:04.872248] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:56.491 [2024-10-13 17:27:04.872255] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:56.491 [2024-10-13 17:27:04.873219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:56.491 [2024-10-13 17:27:04.873230] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:56.491 [2024-10-13 17:27:04.873276] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:56.491 [2024-10-13 17:27:04.874659] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:56.491
17:27:04 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:56.491 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.776
Initializing NVMe Controllers 00:17:01.776 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:01.776 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:01.776 Initialization complete. Launching workers. 00:17:01.776 ======================================================== 00:17:01.776 Latency(us) 00:17:01.776 Device Information : IOPS MiB/s Average min max 00:17:01.776 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39956.40 156.08 3205.90 842.32 6858.91 00:17:01.776 ======================================================== 00:17:01.776 Total : 39956.40 156.08 3205.90 842.32 6858.91 00:17:01.776 00:17:01.776 17:27:10 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:01.776 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.064 Initializing NVMe Controllers 00:17:07.064 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:07.064 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:07.064 Initialization complete. Launching workers. 
00:17:07.064 ======================================================== 00:17:07.064 Latency(us) 00:17:07.064 Device Information : IOPS MiB/s Average min max 00:17:07.064 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37771.40 147.54 3388.78 1085.46 9674.77 00:17:07.064 ======================================================== 00:17:07.065 Total : 37771.40 147.54 3388.78 1085.46 9674.77 00:17:07.065 00:17:07.065 17:27:15 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:07.065 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.352 Initializing NVMe Controllers 00:17:12.352 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:12.352 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:12.352 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:12.352 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:12.352 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:12.352 Initialization complete. Launching workers. 
00:17:12.352 Starting thread on core 2 00:17:12.352 Starting thread on core 3 00:17:12.352 Starting thread on core 1 00:17:12.352 17:27:20 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:12.352 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.652 Initializing NVMe Controllers 00:17:15.652 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:15.652 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:15.652 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:15.652 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:15.652 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:15.652 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:15.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:15.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:15.652 Initialization complete. Launching workers. 
00:17:15.652 Starting thread on core 1 with urgent priority queue 00:17:15.652 Starting thread on core 2 with urgent priority queue 00:17:15.652 Starting thread on core 3 with urgent priority queue 00:17:15.652 Starting thread on core 0 with urgent priority queue 00:17:15.652 SPDK bdev Controller (SPDK2 ) core 0: 15732.33 IO/s 6.36 secs/100000 ios 00:17:15.652 SPDK bdev Controller (SPDK2 ) core 1: 14711.33 IO/s 6.80 secs/100000 ios 00:17:15.652 SPDK bdev Controller (SPDK2 ) core 2: 15649.67 IO/s 6.39 secs/100000 ios 00:17:15.652 SPDK bdev Controller (SPDK2 ) core 3: 10166.67 IO/s 9.84 secs/100000 ios 00:17:15.652 ======================================================== 00:17:15.652 00:17:15.652 17:27:24 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:15.652 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.913 Initializing NVMe Controllers 00:17:15.913 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:15.913 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:15.913 Namespace ID: 1 size: 0GB 00:17:15.913 Initialization complete. 00:17:15.913 INFO: using host memory buffer for IO 00:17:15.913 Hello world! 00:17:15.913 17:27:24 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:15.913 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.298 Initializing NVMe Controllers 00:17:17.298 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:17.298 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:17.298 Initialization complete. Launching workers. 
00:17:17.298 submit (in ns) avg, min, max = 9479.2, 3844.2, 4000203.3 00:17:17.298 complete (in ns) avg, min, max = 17533.2, 2379.2, 3998778.3 00:17:17.298 00:17:17.298 Submit histogram 00:17:17.298 ================ 00:17:17.298 Range in us Cumulative Count 00:17:17.298 3.840 - 3.867: 0.2463% ( 47) 00:17:17.298 3.867 - 3.893: 1.7504% ( 287) 00:17:17.298 3.893 - 3.920: 6.0741% ( 825) 00:17:17.298 3.920 - 3.947: 13.6628% ( 1448) 00:17:17.298 3.947 - 3.973: 23.4474% ( 1867) 00:17:17.298 3.973 - 4.000: 34.8724% ( 2180) 00:17:17.298 4.000 - 4.027: 49.0488% ( 2705) 00:17:17.298 4.027 - 4.053: 65.6203% ( 3162) 00:17:17.298 4.053 - 4.080: 80.4937% ( 2838) 00:17:17.298 4.080 - 4.107: 90.5718% ( 1923) 00:17:17.298 4.107 - 4.133: 95.9593% ( 1028) 00:17:17.298 4.133 - 4.160: 98.2443% ( 436) 00:17:17.298 4.160 - 4.187: 99.0357% ( 151) 00:17:17.298 4.187 - 4.213: 99.2977% ( 50) 00:17:17.298 4.213 - 4.240: 99.3659% ( 13) 00:17:17.298 4.240 - 4.267: 99.3763% ( 2) 00:17:17.298 4.347 - 4.373: 99.3868% ( 2) 00:17:17.298 4.693 - 4.720: 99.3921% ( 1) 00:17:17.298 4.720 - 4.747: 99.3973% ( 1) 00:17:17.298 4.747 - 4.773: 99.4025% ( 1) 00:17:17.298 4.773 - 4.800: 99.4078% ( 1) 00:17:17.298 4.800 - 4.827: 99.4183% ( 2) 00:17:17.298 4.827 - 4.853: 99.4288% ( 2) 00:17:17.298 4.853 - 4.880: 99.4340% ( 1) 00:17:17.298 4.960 - 4.987: 99.4392% ( 1) 00:17:17.298 5.013 - 5.040: 99.4445% ( 1) 00:17:17.298 5.040 - 5.067: 99.4497% ( 1) 00:17:17.298 5.227 - 5.253: 99.4550% ( 1) 00:17:17.298 5.253 - 5.280: 99.4602% ( 1) 00:17:17.298 5.280 - 5.307: 99.4707% ( 2) 00:17:17.298 5.360 - 5.387: 99.4759% ( 1) 00:17:17.298 5.387 - 5.413: 99.4812% ( 1) 00:17:17.298 5.547 - 5.573: 99.4864% ( 1) 00:17:17.298 5.680 - 5.707: 99.4916% ( 1) 00:17:17.298 5.707 - 5.733: 99.4969% ( 1) 00:17:17.298 5.867 - 5.893: 99.5021% ( 1) 00:17:17.298 5.920 - 5.947: 99.5126% ( 2) 00:17:17.298 5.973 - 6.000: 99.5231% ( 2) 00:17:17.298 6.027 - 6.053: 99.5336% ( 2) 00:17:17.298 6.080 - 6.107: 99.5440% ( 2) 00:17:17.298 6.107 - 6.133: 
99.5493% ( 1) 00:17:17.298 6.160 - 6.187: 99.5598% ( 2) 00:17:17.298 6.187 - 6.213: 99.5650% ( 1) 00:17:17.298 6.213 - 6.240: 99.5703% ( 1) 00:17:17.298 6.240 - 6.267: 99.5755% ( 1) 00:17:17.298 6.373 - 6.400: 99.5860% ( 2) 00:17:17.298 6.453 - 6.480: 99.5912% ( 1) 00:17:17.298 6.480 - 6.507: 99.5965% ( 1) 00:17:17.298 6.560 - 6.587: 99.6017% ( 1) 00:17:17.298 6.587 - 6.613: 99.6069% ( 1) 00:17:17.298 6.613 - 6.640: 99.6122% ( 1) 00:17:17.298 6.640 - 6.667: 99.6227% ( 2) 00:17:17.298 6.720 - 6.747: 99.6384% ( 3) 00:17:17.298 6.747 - 6.773: 99.6436% ( 1) 00:17:17.298 6.800 - 6.827: 99.6593% ( 3) 00:17:17.298 6.827 - 6.880: 99.6751% ( 3) 00:17:17.298 6.987 - 7.040: 99.6803% ( 1) 00:17:17.298 7.040 - 7.093: 99.6960% ( 3) 00:17:17.298 7.093 - 7.147: 99.7118% ( 3) 00:17:17.298 7.147 - 7.200: 99.7275% ( 3) 00:17:17.298 7.200 - 7.253: 99.7327% ( 1) 00:17:17.298 7.253 - 7.307: 99.7432% ( 2) 00:17:17.298 7.307 - 7.360: 99.7642% ( 4) 00:17:17.298 7.413 - 7.467: 99.7694% ( 1) 00:17:17.298 7.467 - 7.520: 99.7746% ( 1) 00:17:17.298 7.520 - 7.573: 99.7904% ( 3) 00:17:17.298 7.573 - 7.627: 99.7956% ( 1) 00:17:17.298 7.627 - 7.680: 99.8008% ( 1) 00:17:17.298 7.680 - 7.733: 99.8061% ( 1) 00:17:17.298 7.787 - 7.840: 99.8113% ( 1) 00:17:17.298 7.840 - 7.893: 99.8166% ( 1) 00:17:17.298 7.893 - 7.947: 99.8271% ( 2) 00:17:17.298 7.947 - 8.000: 99.8323% ( 1) 00:17:17.298 8.160 - 8.213: 99.8375% ( 1) 00:17:17.298 8.907 - 8.960: 99.8480% ( 2) 00:17:17.298 8.960 - 9.013: 99.8533% ( 1) 00:17:17.298 13.973 - 14.080: 99.8585% ( 1) 00:17:17.298 14.720 - 14.827: 99.8637% ( 1) 00:17:17.298 3986.773 - 4014.080: 100.0000% ( 26) 00:17:17.298 00:17:17.298 Complete histogram 00:17:17.298 ================== 00:17:17.298 Range in us Cumulative Count 00:17:17.298 2.373 - 2.387: 0.1101% ( 21) 00:17:17.298 2.387 - 2.400: 0.5608% ( 86) 00:17:17.298 2.400 - 2.413: 0.9224% ( 69) 00:17:17.298 2.413 - 2.427: 1.3102% ( 74) 00:17:17.298 2.427 - 2.440: 34.6313% ( 6358) 00:17:17.298 2.440 - 2.453: 61.1289% ( 5056) 
00:17:17.298 2.453 - 2.467: 78.8848% ( 3388) 00:17:17.298 2.467 - 2.480: 90.8915% ( 2291) 00:17:17.298 2.480 - 2.493: 95.1732% ( 817) 00:17:17.298 2.493 - 2.507: 96.7297% ( 297) 00:17:17.298 2.507 - 2.520: 97.8775% ( 219) 00:17:17.298 2.520 - 2.533: 98.5850% ( 135) 00:17:17.298 2.533 - 2.547: 99.0671% ( 92) 00:17:17.298 2.547 - 2.560: 99.3239% ( 49) 00:17:17.298 2.560 - 2.573: 99.3711% ( 9) 00:17:17.298 2.573 - 2.587: 99.3816% ( 2) 00:17:17.298 2.813 - 2.827: 99.3868% ( 1) 00:17:17.298 2.867 - 2.880: 99.3921% ( 1) 00:17:17.298 2.987 - 3.000: 99.3973% ( 1) 00:17:17.298 3.000 - 3.013: 99.4025% ( 1) 00:17:17.298 3.040 - 3.053: 99.4078% ( 1) 00:17:17.298 3.080 - 3.093: 99.4130% ( 1) 00:17:17.298 3.187 - 3.200: 99.4183% ( 1) 00:17:17.298 3.213 - 3.227: 99.4235% ( 1) 00:17:17.298 4.267 - 4.293: 99.4288% ( 1) 00:17:17.298 4.453 - 4.480: 99.4392% ( 2) 00:17:17.298 4.800 - 4.827: 99.4445% ( 1) 00:17:17.298 4.827 - 4.853: 99.4497% ( 1) 00:17:17.298 4.853 - 4.880: 99.4550% ( 1) 00:17:17.298 5.040 - 5.067: 99.4602% ( 1) 00:17:17.298 5.200 - 5.227: 99.4654% ( 1) 00:17:17.298 5.227 - 5.253: 99.4759% ( 2) 00:17:17.298 5.253 - 5.280: 99.4812% ( 1) 00:17:17.298 5.333 - 5.360: 99.4864% ( 1) 00:17:17.298 5.360 - 5.387: 99.4916% ( 1) 00:17:17.298 5.493 - 5.520: 99.4969% ( 1) 00:17:17.298 5.520 - 5.547: 99.5021% ( 1) 00:17:17.298 5.547 - 5.573: 99.5074% ( 1) 00:17:17.298 5.653 - 5.680: 99.5126% ( 1) 00:17:17.298 5.707 - 5.733: 99.5178% ( 1) 00:17:17.298 5.840 - 5.867: 99.5283% ( 2) 00:17:17.298 5.867 - 5.893: 99.5336% ( 1) 00:17:17.298 5.893 - 5.920: 99.5388% ( 1) 00:17:17.298 5.920 - 5.947: 99.5493% ( 2) 00:17:17.298 6.000 - 6.027: 99.5545% ( 1) 00:17:17.298 6.027 - 6.053: 99.5598% ( 1) 00:17:17.298 6.053 - 6.080: 99.5650% ( 1) 00:17:17.298 6.107 - 6.133: 99.5703% ( 1) 00:17:17.298 6.133 - 6.160: 99.5807% ( 2) 00:17:17.298 6.240 - 6.267: 99.5860% ( 1) 00:17:17.298 6.267 - 6.293: 99.5912% ( 1) 00:17:17.298 6.320 - 6.347: 99.5965% ( 1) 00:17:17.298 6.453 - 6.480: 99.6017% ( 1) 
00:17:17.298 7.360 - 7.413: 99.6069% ( 1) 00:17:17.298 10.027 - 10.080: 99.6122% ( 1) 00:17:17.298 10.080 - 10.133: 99.6174% ( 1) 00:17:17.298 13.280 - 13.333: 99.6227% ( 1) 00:17:17.298 3986.773 - 4014.080: 100.0000% ( 72) 00:17:17.298 00:17:17.298 17:27:25 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:17.298 17:27:25 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:17.298 17:27:25 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:17.298 17:27:25 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:17.298 17:27:25 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:17.298 [ 00:17:17.298 { 00:17:17.298 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:17.298 "subtype": "Discovery", 00:17:17.298 "listen_addresses": [], 00:17:17.298 "allow_any_host": true, 00:17:17.298 "hosts": [] 00:17:17.298 }, 00:17:17.298 { 00:17:17.298 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:17.298 "subtype": "NVMe", 00:17:17.298 "listen_addresses": [ 00:17:17.298 { 00:17:17.298 "transport": "VFIOUSER", 00:17:17.298 "trtype": "VFIOUSER", 00:17:17.298 "adrfam": "IPv4", 00:17:17.298 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:17.299 "trsvcid": "0" 00:17:17.299 } 00:17:17.299 ], 00:17:17.299 "allow_any_host": true, 00:17:17.299 "hosts": [], 00:17:17.299 "serial_number": "SPDK1", 00:17:17.299 "model_number": "SPDK bdev Controller", 00:17:17.299 "max_namespaces": 32, 00:17:17.299 "min_cntlid": 1, 00:17:17.299 "max_cntlid": 65519, 00:17:17.299 "namespaces": [ 00:17:17.299 { 00:17:17.299 "nsid": 1, 00:17:17.299 "bdev_name": "Malloc1", 00:17:17.299 "name": "Malloc1", 00:17:17.299 "nguid": "12025160BD1043CE955E9A49D21496FC", 00:17:17.299 "uuid": "12025160-bd10-43ce-955e-9a49d21496fc" 00:17:17.299 }, 00:17:17.299 { 00:17:17.299 "nsid": 2, 
00:17:17.299 "bdev_name": "Malloc3", 00:17:17.299 "name": "Malloc3", 00:17:17.299 "nguid": "E74C6FA6F82442A991DD3D812C24038B", 00:17:17.299 "uuid": "e74c6fa6-f824-42a9-91dd-3d812c24038b" 00:17:17.299 } 00:17:17.299 ] 00:17:17.299 }, 00:17:17.299 { 00:17:17.299 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:17.299 "subtype": "NVMe", 00:17:17.299 "listen_addresses": [ 00:17:17.299 { 00:17:17.299 "transport": "VFIOUSER", 00:17:17.299 "trtype": "VFIOUSER", 00:17:17.299 "adrfam": "IPv4", 00:17:17.299 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:17.299 "trsvcid": "0" 00:17:17.299 } 00:17:17.299 ], 00:17:17.299 "allow_any_host": true, 00:17:17.299 "hosts": [], 00:17:17.299 "serial_number": "SPDK2", 00:17:17.299 "model_number": "SPDK bdev Controller", 00:17:17.299 "max_namespaces": 32, 00:17:17.299 "min_cntlid": 1, 00:17:17.299 "max_cntlid": 65519, 00:17:17.299 "namespaces": [ 00:17:17.299 { 00:17:17.299 "nsid": 1, 00:17:17.299 "bdev_name": "Malloc2", 00:17:17.299 "name": "Malloc2", 00:17:17.299 "nguid": "F28A5324F99A49EB9759968707300154", 00:17:17.299 "uuid": "f28a5324-f99a-49eb-9759-968707300154" 00:17:17.299 } 00:17:17.299 ] 00:17:17.299 } 00:17:17.299 ] 00:17:17.560 17:27:25 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:17.560 17:27:25 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:17.560 17:27:25 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3149958 00:17:17.560 17:27:25 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:17.560 17:27:25 -- common/autotest_common.sh@1244 -- # local i=0 00:17:17.560 17:27:25 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:17.560 17:27:25 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:17.560 17:27:25 -- common/autotest_common.sh@1255 -- # return 0 00:17:17.560 17:27:25 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:17.560 17:27:25 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:17.560 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.560 Malloc4 00:17:17.560 17:27:26 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:17.821 17:27:26 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:17.821 Asynchronous Event Request test 00:17:17.821 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:17.821 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:17.821 Registering asynchronous event callbacks... 00:17:17.821 Starting namespace attribute notice tests for all controllers... 00:17:17.821 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:17.821 aer_cb - Changed Namespace 00:17:17.821 Cleaning up... 
00:17:18.082 [ 00:17:18.082 { 00:17:18.082 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:18.082 "subtype": "Discovery", 00:17:18.082 "listen_addresses": [], 00:17:18.082 "allow_any_host": true, 00:17:18.082 "hosts": [] 00:17:18.082 }, 00:17:18.082 { 00:17:18.082 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:18.082 "subtype": "NVMe", 00:17:18.082 "listen_addresses": [ 00:17:18.082 { 00:17:18.082 "transport": "VFIOUSER", 00:17:18.082 "trtype": "VFIOUSER", 00:17:18.082 "adrfam": "IPv4", 00:17:18.082 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:18.082 "trsvcid": "0" 00:17:18.082 } 00:17:18.082 ], 00:17:18.082 "allow_any_host": true, 00:17:18.082 "hosts": [], 00:17:18.082 "serial_number": "SPDK1", 00:17:18.082 "model_number": "SPDK bdev Controller", 00:17:18.082 "max_namespaces": 32, 00:17:18.082 "min_cntlid": 1, 00:17:18.082 "max_cntlid": 65519, 00:17:18.082 "namespaces": [ 00:17:18.082 { 00:17:18.082 "nsid": 1, 00:17:18.082 "bdev_name": "Malloc1", 00:17:18.082 "name": "Malloc1", 00:17:18.082 "nguid": "12025160BD1043CE955E9A49D21496FC", 00:17:18.082 "uuid": "12025160-bd10-43ce-955e-9a49d21496fc" 00:17:18.082 }, 00:17:18.082 { 00:17:18.082 "nsid": 2, 00:17:18.082 "bdev_name": "Malloc3", 00:17:18.082 "name": "Malloc3", 00:17:18.082 "nguid": "E74C6FA6F82442A991DD3D812C24038B", 00:17:18.082 "uuid": "e74c6fa6-f824-42a9-91dd-3d812c24038b" 00:17:18.082 } 00:17:18.082 ] 00:17:18.082 }, 00:17:18.082 { 00:17:18.082 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:18.082 "subtype": "NVMe", 00:17:18.082 "listen_addresses": [ 00:17:18.082 { 00:17:18.082 "transport": "VFIOUSER", 00:17:18.082 "trtype": "VFIOUSER", 00:17:18.082 "adrfam": "IPv4", 00:17:18.082 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:18.082 "trsvcid": "0" 00:17:18.082 } 00:17:18.082 ], 00:17:18.082 "allow_any_host": true, 00:17:18.082 "hosts": [], 00:17:18.082 "serial_number": "SPDK2", 00:17:18.082 "model_number": "SPDK bdev Controller", 00:17:18.082 "max_namespaces": 32, 00:17:18.082 
"min_cntlid": 1, 00:17:18.082 "max_cntlid": 65519, 00:17:18.082 "namespaces": [ 00:17:18.082 { 00:17:18.082 "nsid": 1, 00:17:18.082 "bdev_name": "Malloc2", 00:17:18.082 "name": "Malloc2", 00:17:18.082 "nguid": "F28A5324F99A49EB9759968707300154", 00:17:18.082 "uuid": "f28a5324-f99a-49eb-9759-968707300154" 00:17:18.082 }, 00:17:18.082 { 00:17:18.082 "nsid": 2, 00:17:18.082 "bdev_name": "Malloc4", 00:17:18.082 "name": "Malloc4", 00:17:18.082 "nguid": "C0ECE1C8187648E8ACF596BE20A3AC74", 00:17:18.082 "uuid": "c0ece1c8-1876-48e8-acf5-96be20a3ac74" 00:17:18.082 } 00:17:18.082 ] 00:17:18.082 } 00:17:18.082 ] 00:17:18.082 17:27:26 -- target/nvmf_vfio_user.sh@44 -- # wait 3149958 00:17:18.083 17:27:26 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:18.083 17:27:26 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3140202 00:17:18.083 17:27:26 -- common/autotest_common.sh@926 -- # '[' -z 3140202 ']' 00:17:18.083 17:27:26 -- common/autotest_common.sh@930 -- # kill -0 3140202 00:17:18.083 17:27:26 -- common/autotest_common.sh@931 -- # uname 00:17:18.083 17:27:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:18.083 17:27:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3140202 00:17:18.083 17:27:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:18.083 17:27:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:18.083 17:27:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3140202' 00:17:18.083 killing process with pid 3140202 00:17:18.083 17:27:26 -- common/autotest_common.sh@945 -- # kill 3140202 00:17:18.083 [2024-10-13 17:27:26.450475] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:18.083 17:27:26 -- common/autotest_common.sh@950 -- # wait 3140202 00:17:18.345 17:27:26 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 
00:17:18.345 17:27:26 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:18.345 17:27:26 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:18.345 17:27:26 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:18.345 17:27:26 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:18.345 17:27:26 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3150000 00:17:18.345 17:27:26 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3150000' 00:17:18.345 Process pid: 3150000 00:17:18.345 17:27:26 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:18.345 17:27:26 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:18.345 17:27:26 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3150000 00:17:18.345 17:27:26 -- common/autotest_common.sh@819 -- # '[' -z 3150000 ']' 00:17:18.345 17:27:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.345 17:27:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:18.345 17:27:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.345 17:27:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:18.345 17:27:26 -- common/autotest_common.sh@10 -- # set +x 00:17:18.345 [2024-10-13 17:27:26.671642] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:18.345 [2024-10-13 17:27:26.672592] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:17:18.345 [2024-10-13 17:27:26.672635] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.345 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.345 [2024-10-13 17:27:26.736220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.345 [2024-10-13 17:27:26.767272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:18.345 [2024-10-13 17:27:26.767411] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.345 [2024-10-13 17:27:26.767422] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.345 [2024-10-13 17:27:26.767436] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.345 [2024-10-13 17:27:26.767603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.345 [2024-10-13 17:27:26.767719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.345 [2024-10-13 17:27:26.767875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.345 [2024-10-13 17:27:26.767876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.345 [2024-10-13 17:27:26.834003] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:17:18.345 [2024-10-13 17:27:26.834022] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:17:18.345 [2024-10-13 17:27:26.834311] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 
00:17:18.345 [2024-10-13 17:27:26.834507] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:18.345 [2024-10-13 17:27:26.834596] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:17:19.288 17:27:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:19.288 17:27:27 -- common/autotest_common.sh@852 -- # return 0 00:17:19.288 17:27:27 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:20.231 17:27:28 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:20.231 17:27:28 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:20.231 17:27:28 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:20.231 17:27:28 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:20.231 17:27:28 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:20.231 17:27:28 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:20.492 Malloc1 00:17:20.492 17:27:28 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:20.492 17:27:28 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:20.753 17:27:29 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:21.014 17:27:29 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:21.014 17:27:29 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:17:21.014 17:27:29 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:21.014 Malloc2 00:17:21.014 17:27:29 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:21.275 17:27:29 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:21.535 17:27:29 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:21.535 17:27:29 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:21.535 17:27:29 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3150000 00:17:21.535 17:27:29 -- common/autotest_common.sh@926 -- # '[' -z 3150000 ']' 00:17:21.535 17:27:29 -- common/autotest_common.sh@930 -- # kill -0 3150000 00:17:21.535 17:27:29 -- common/autotest_common.sh@931 -- # uname 00:17:21.535 17:27:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.535 17:27:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3150000 00:17:21.535 17:27:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:21.535 17:27:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:21.535 17:27:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3150000' 00:17:21.535 killing process with pid 3150000 00:17:21.535 17:27:30 -- common/autotest_common.sh@945 -- # kill 3150000 00:17:21.535 17:27:30 -- common/autotest_common.sh@950 -- # wait 3150000 00:17:21.797 17:27:30 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:21.797 17:27:30 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM 
EXIT 00:17:21.797 00:17:21.797 real 0m50.563s 00:17:21.797 user 3m20.880s 00:17:21.797 sys 0m3.015s 00:17:21.797 17:27:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.797 17:27:30 -- common/autotest_common.sh@10 -- # set +x 00:17:21.797 ************************************ 00:17:21.797 END TEST nvmf_vfio_user 00:17:21.797 ************************************ 00:17:21.797 17:27:30 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:21.797 17:27:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:21.797 17:27:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:21.797 17:27:30 -- common/autotest_common.sh@10 -- # set +x 00:17:21.797 ************************************ 00:17:21.797 START TEST nvmf_vfio_user_nvme_compliance 00:17:21.797 ************************************ 00:17:21.797 17:27:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:21.797 * Looking for test storage... 
00:17:21.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:21.797 17:27:30 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.797 17:27:30 -- nvmf/common.sh@7 -- # uname -s 00:17:21.797 17:27:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.797 17:27:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.797 17:27:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.797 17:27:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.797 17:27:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.797 17:27:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.797 17:27:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.797 17:27:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.797 17:27:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.797 17:27:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.797 17:27:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.797 17:27:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.797 17:27:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.797 17:27:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.797 17:27:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.797 17:27:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.797 17:27:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.797 17:27:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.797 17:27:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.797 17:27:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.797 17:27:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.797 17:27:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.797 17:27:30 -- paths/export.sh@5 -- # export PATH 00:17:21.797 17:27:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.797 17:27:30 -- nvmf/common.sh@46 -- # : 0 00:17:21.797 17:27:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:21.797 17:27:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:21.797 17:27:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:21.797 17:27:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.797 17:27:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.797 17:27:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:21.797 17:27:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:21.797 17:27:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:22.058 17:27:30 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:22.058 17:27:30 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:22.058 17:27:30 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:22.058 17:27:30 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:22.058 17:27:30 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:22.058 17:27:30 -- compliance/compliance.sh@20 -- # nvmfpid=3150854 00:17:22.058 17:27:30 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3150854' 00:17:22.058 Process pid: 3150854 00:17:22.058 17:27:30 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:22.058 17:27:30 -- compliance/compliance.sh@24 -- # waitforlisten 3150854 00:17:22.058 17:27:30 -- 
common/autotest_common.sh@819 -- # '[' -z 3150854 ']' 00:17:22.058 17:27:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.058 17:27:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:22.058 17:27:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.059 17:27:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:22.059 17:27:30 -- common/autotest_common.sh@10 -- # set +x 00:17:22.059 17:27:30 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:22.059 [2024-10-13 17:27:30.373492] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:22.059 [2024-10-13 17:27:30.373556] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.059 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.059 [2024-10-13 17:27:30.438896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:22.059 [2024-10-13 17:27:30.472753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.059 [2024-10-13 17:27:30.472882] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.059 [2024-10-13 17:27:30.472891] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.059 [2024-10-13 17:27:30.472899] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:22.059 [2024-10-13 17:27:30.473102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.059 [2024-10-13 17:27:30.473205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.059 [2024-10-13 17:27:30.473320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.999 17:27:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.999 17:27:31 -- common/autotest_common.sh@852 -- # return 0 00:17:22.999 17:27:31 -- compliance/compliance.sh@26 -- # sleep 1 00:17:23.942 17:27:32 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:23.942 17:27:32 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:23.942 17:27:32 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:23.942 17:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.942 17:27:32 -- common/autotest_common.sh@10 -- # set +x 00:17:23.942 17:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.942 17:27:32 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:23.942 17:27:32 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:23.942 17:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.942 17:27:32 -- common/autotest_common.sh@10 -- # set +x 00:17:23.942 malloc0 00:17:23.942 17:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.942 17:27:32 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:23.942 17:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.942 17:27:32 -- common/autotest_common.sh@10 -- # set +x 00:17:23.942 17:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.942 17:27:32 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:23.942 17:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.942 
17:27:32 -- common/autotest_common.sh@10 -- # set +x 00:17:23.942 17:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.942 17:27:32 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:23.942 17:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.942 17:27:32 -- common/autotest_common.sh@10 -- # set +x 00:17:23.942 17:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.942 17:27:32 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:23.942 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.942 00:17:23.942 00:17:23.942 CUnit - A unit testing framework for C - Version 2.1-3 00:17:23.942 http://cunit.sourceforge.net/ 00:17:23.942 00:17:23.942 00:17:23.942 Suite: nvme_compliance 00:17:23.942 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-13 17:27:32.422839] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:23.942 [2024-10-13 17:27:32.422864] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:23.942 [2024-10-13 17:27:32.422869] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:23.942 passed 00:17:24.202 Test: admin_identify_ctrlr_verify_fused ...passed 00:17:24.202 Test: admin_identify_ns ...[2024-10-13 17:27:32.677091] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:24.202 [2024-10-13 17:27:32.685071] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:24.461 passed 00:17:24.461 Test: admin_get_features_mandatory_features ...passed 00:17:24.461 Test: admin_get_features_optional_features ...passed 00:17:24.721 Test: 
admin_set_features_number_of_queues ...passed 00:17:24.721 Test: admin_get_log_page_mandatory_logs ...passed 00:17:24.981 Test: admin_get_log_page_with_lpo ...[2024-10-13 17:27:33.356073] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:24.981 passed 00:17:24.981 Test: fabric_property_get ...passed 00:17:25.241 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-13 17:27:33.562124] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:25.241 passed 00:17:25.241 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-13 17:27:33.742075] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:25.241 [2024-10-13 17:27:33.758068] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:25.502 passed 00:17:25.502 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-13 17:27:33.858028] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:25.502 passed 00:17:25.761 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-13 17:27:34.028073] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:25.761 [2024-10-13 17:27:34.052072] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:25.761 passed 00:17:25.761 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-13 17:27:34.152413] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:25.761 [2024-10-13 17:27:34.152437] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:25.761 passed 00:17:26.020 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-13 17:27:34.340070] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:26.020 [2024-10-13 17:27:34.350069] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 257 00:17:26.020 [2024-10-13 17:27:34.358071] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:26.020 [2024-10-13 17:27:34.366071] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:26.020 passed 00:17:26.020 Test: admin_create_io_sq_verify_pc ...[2024-10-13 17:27:34.507077] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:26.280 passed 00:17:27.220 Test: admin_create_io_qp_max_qps ...[2024-10-13 17:27:35.723074] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:27.789 passed 00:17:28.049 Test: admin_create_io_sq_shared_cq ...[2024-10-13 17:27:36.325072] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:28.049 passed 00:17:28.049 00:17:28.049 Run Summary: Type Total Ran Passed Failed Inactive 00:17:28.049 suites 1 1 n/a 0 0 00:17:28.049 tests 18 18 18 0 0 00:17:28.049 asserts 360 360 360 0 n/a 00:17:28.049 00:17:28.049 Elapsed time = 1.648 seconds 00:17:28.049 17:27:36 -- compliance/compliance.sh@42 -- # killprocess 3150854 00:17:28.049 17:27:36 -- common/autotest_common.sh@926 -- # '[' -z 3150854 ']' 00:17:28.049 17:27:36 -- common/autotest_common.sh@930 -- # kill -0 3150854 00:17:28.049 17:27:36 -- common/autotest_common.sh@931 -- # uname 00:17:28.049 17:27:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.049 17:27:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3150854 00:17:28.049 17:27:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:28.049 17:27:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:28.049 17:27:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3150854' 00:17:28.049 killing process with pid 3150854 00:17:28.049 17:27:36 -- common/autotest_common.sh@945 -- # kill 3150854 00:17:28.049 
17:27:36 -- common/autotest_common.sh@950 -- # wait 3150854 00:17:28.310 17:27:36 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:28.310 17:27:36 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:28.310 00:17:28.310 real 0m6.380s 00:17:28.310 user 0m18.437s 00:17:28.310 sys 0m0.475s 00:17:28.310 17:27:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.310 17:27:36 -- common/autotest_common.sh@10 -- # set +x 00:17:28.310 ************************************ 00:17:28.310 END TEST nvmf_vfio_user_nvme_compliance 00:17:28.310 ************************************ 00:17:28.310 17:27:36 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:28.310 17:27:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:28.310 17:27:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:28.310 17:27:36 -- common/autotest_common.sh@10 -- # set +x 00:17:28.310 ************************************ 00:17:28.310 START TEST nvmf_vfio_user_fuzz 00:17:28.310 ************************************ 00:17:28.310 17:27:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:28.310 * Looking for test storage... 
00:17:28.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.310 17:27:36 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.310 17:27:36 -- nvmf/common.sh@7 -- # uname -s 00:17:28.310 17:27:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.310 17:27:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.310 17:27:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.310 17:27:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.310 17:27:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.310 17:27:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.310 17:27:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.310 17:27:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.310 17:27:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.310 17:27:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.310 17:27:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.310 17:27:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.310 17:27:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.310 17:27:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.310 17:27:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.310 17:27:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.310 17:27:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.310 17:27:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.310 17:27:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.311 17:27:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.311 17:27:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.311 17:27:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.311 17:27:36 -- paths/export.sh@5 -- # export PATH 00:17:28.311 17:27:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.311 17:27:36 -- nvmf/common.sh@46 -- # : 0 00:17:28.311 17:27:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:28.311 17:27:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:28.311 17:27:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:28.311 17:27:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.311 17:27:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.311 17:27:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:28.311 17:27:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:28.311 17:27:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3152140 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3152140' 00:17:28.311 Process pid: 3152140 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@23 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:28.311 17:27:36 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3152140 00:17:28.311 17:27:36 -- common/autotest_common.sh@819 -- # '[' -z 3152140 ']' 00:17:28.311 17:27:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.311 17:27:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:28.311 17:27:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.311 17:27:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:28.311 17:27:36 -- common/autotest_common.sh@10 -- # set +x 00:17:29.251 17:27:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:29.251 17:27:37 -- common/autotest_common.sh@852 -- # return 0 00:17:29.251 17:27:37 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:30.192 17:27:38 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:30.192 17:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:30.192 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:17:30.192 17:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:30.192 17:27:38 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:30.192 17:27:38 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:30.192 17:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:30.192 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:17:30.192 malloc0 00:17:30.192 17:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:30.192 17:27:38 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a 
-s spdk 00:17:30.192 17:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:30.192 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:17:30.192 17:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:30.192 17:27:38 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:30.192 17:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:30.192 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:17:30.192 17:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:30.192 17:27:38 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:30.192 17:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:30.192 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:17:30.192 17:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:30.192 17:27:38 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:30.192 17:27:38 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:02.317 Fuzzing completed. 
Shutting down the fuzz application 00:18:02.317 00:18:02.317 Dumping successful admin opcodes: 00:18:02.317 8, 9, 10, 24, 00:18:02.317 Dumping successful io opcodes: 00:18:02.317 0, 00:18:02.317 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1227868, total successful commands: 4820, random_seed: 2972098880 00:18:02.317 NS: 0x200003a1ef00 admin qp, Total commands completed: 180665, total successful commands: 1457, random_seed: 1074017728 00:18:02.317 17:28:09 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:02.317 17:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.317 17:28:09 -- common/autotest_common.sh@10 -- # set +x 00:18:02.317 17:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.317 17:28:09 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3152140 00:18:02.317 17:28:09 -- common/autotest_common.sh@926 -- # '[' -z 3152140 ']' 00:18:02.317 17:28:09 -- common/autotest_common.sh@930 -- # kill -0 3152140 00:18:02.317 17:28:09 -- common/autotest_common.sh@931 -- # uname 00:18:02.317 17:28:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:02.317 17:28:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3152140 00:18:02.317 17:28:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:02.317 17:28:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:02.317 17:28:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3152140' 00:18:02.317 killing process with pid 3152140 00:18:02.317 17:28:09 -- common/autotest_common.sh@945 -- # kill 3152140 00:18:02.317 17:28:09 -- common/autotest_common.sh@950 -- # wait 3152140 00:18:02.317 17:28:09 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:02.317 17:28:09 -- 
target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:02.317 00:18:02.317 real 0m32.702s 00:18:02.317 user 0m35.291s 00:18:02.317 sys 0m26.441s 00:18:02.317 17:28:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.317 17:28:09 -- common/autotest_common.sh@10 -- # set +x 00:18:02.318 ************************************ 00:18:02.318 END TEST nvmf_vfio_user_fuzz 00:18:02.318 ************************************ 00:18:02.318 17:28:09 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:02.318 17:28:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:02.318 17:28:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:02.318 17:28:09 -- common/autotest_common.sh@10 -- # set +x 00:18:02.318 ************************************ 00:18:02.318 START TEST nvmf_host_management 00:18:02.318 ************************************ 00:18:02.318 17:28:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:02.318 * Looking for test storage... 
00:18:02.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.318 17:28:09 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.318 17:28:09 -- nvmf/common.sh@7 -- # uname -s 00:18:02.318 17:28:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.318 17:28:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.318 17:28:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.318 17:28:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.318 17:28:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.318 17:28:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.318 17:28:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.318 17:28:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.318 17:28:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.318 17:28:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.318 17:28:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:02.318 17:28:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:02.318 17:28:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.318 17:28:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.318 17:28:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.318 17:28:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.318 17:28:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.318 17:28:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.318 17:28:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.318 17:28:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.318 17:28:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.318 17:28:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.318 17:28:09 -- paths/export.sh@5 -- # export PATH 00:18:02.318 17:28:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.318 17:28:09 -- nvmf/common.sh@46 -- # : 0 00:18:02.318 17:28:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:02.318 17:28:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:02.318 17:28:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:02.318 17:28:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.318 17:28:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.318 17:28:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:02.318 17:28:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:02.318 17:28:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:02.318 17:28:09 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:02.318 17:28:09 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.318 17:28:09 -- target/host_management.sh@104 -- # nvmftestinit 00:18:02.318 17:28:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:02.318 17:28:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.318 17:28:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:02.318 17:28:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:02.318 17:28:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:02.318 17:28:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.318 17:28:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.318 17:28:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:18:02.318 17:28:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:02.318 17:28:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:02.318 17:28:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:02.318 17:28:09 -- common/autotest_common.sh@10 -- # set +x 00:18:08.907 17:28:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:08.907 17:28:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:08.907 17:28:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:08.907 17:28:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:08.907 17:28:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:08.907 17:28:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:08.907 17:28:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:08.907 17:28:16 -- nvmf/common.sh@294 -- # net_devs=() 00:18:08.907 17:28:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:08.907 17:28:16 -- nvmf/common.sh@295 -- # e810=() 00:18:08.907 17:28:16 -- nvmf/common.sh@295 -- # local -ga e810 00:18:08.907 17:28:16 -- nvmf/common.sh@296 -- # x722=() 00:18:08.907 17:28:16 -- nvmf/common.sh@296 -- # local -ga x722 00:18:08.907 17:28:16 -- nvmf/common.sh@297 -- # mlx=() 00:18:08.907 17:28:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:08.907 17:28:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.907 17:28:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.907 17:28:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.907 17:28:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.907 17:28:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.907 17:28:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.907 17:28:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.907 17:28:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:18:08.907 17:28:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.907 17:28:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.907 17:28:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.907 17:28:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:08.907 17:28:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:08.907 17:28:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:08.907 17:28:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:08.908 17:28:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:08.908 17:28:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:08.908 17:28:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:08.908 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:08.908 17:28:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:08.908 17:28:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:08.908 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:08.908 17:28:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:08.908 17:28:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:08.908 
17:28:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:08.908 17:28:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.908 17:28:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:08.908 17:28:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.908 17:28:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:08.908 Found net devices under 0000:31:00.0: cvl_0_0 00:18:08.908 17:28:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.908 17:28:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:08.908 17:28:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.908 17:28:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:08.908 17:28:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.908 17:28:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:08.908 Found net devices under 0000:31:00.1: cvl_0_1 00:18:08.908 17:28:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.908 17:28:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:08.908 17:28:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:08.908 17:28:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:08.908 17:28:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.908 17:28:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.908 17:28:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:08.908 17:28:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:08.908 17:28:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:08.908 17:28:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:08.908 17:28:16 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:08.908 17:28:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:08.908 17:28:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.908 17:28:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:08.908 17:28:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:08.908 17:28:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:08.908 17:28:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:08.908 17:28:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:08.908 17:28:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:08.908 17:28:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:08.908 17:28:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:08.908 17:28:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:08.908 17:28:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:08.908 17:28:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:08.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:18:08.908 00:18:08.908 --- 10.0.0.2 ping statistics --- 00:18:08.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.908 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:18:08.908 17:28:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:08.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:08.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:18:08.908 00:18:08.908 --- 10.0.0.1 ping statistics --- 00:18:08.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.908 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:08.908 17:28:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.908 17:28:16 -- nvmf/common.sh@410 -- # return 0 00:18:08.908 17:28:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:08.908 17:28:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.908 17:28:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:08.908 17:28:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.908 17:28:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:08.908 17:28:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:08.908 17:28:16 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:18:08.908 17:28:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:08.908 17:28:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:08.908 17:28:16 -- common/autotest_common.sh@10 -- # set +x 00:18:08.908 ************************************ 00:18:08.908 START TEST nvmf_host_management 00:18:08.908 ************************************ 00:18:08.908 17:28:16 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:18:08.908 17:28:16 -- target/host_management.sh@69 -- # starttarget 00:18:08.908 17:28:16 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:18:08.908 17:28:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:08.908 17:28:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:08.908 17:28:16 -- common/autotest_common.sh@10 -- # set +x 00:18:08.908 17:28:16 -- nvmf/common.sh@469 -- # nvmfpid=3162279 00:18:08.908 17:28:16 -- nvmf/common.sh@470 -- # waitforlisten 3162279 
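The `nvmf_tcp_init` steps above (flush addresses, create a namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open port 4420, ping both ways) can be collected into one script. Interface and namespace names are taken from the log; the `run` wrapper that only prints each command is an assumption so the sketch is safe to execute without root:

```shell
# Dry-run sketch of nvmf_tcp_init: print each step instead of executing it,
# so the wiring can be inspected without root privileges.
run() { printf '+ %s\n' "$*"; }

NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_INTERFACE=cvl_0_0
NVMF_INITIATOR_INTERFACE=cvl_0_1
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

run ip -4 addr flush "$NVMF_TARGET_INTERFACE"
run ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"
run ip netns add "$NVMF_TARGET_NAMESPACE"
# Move the target NIC into its own namespace so initiator and target
# traverse a real TCP path instead of the loopback shortcut.
run ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add "$NVMF_INITIATOR_IP/24" dev "$NVMF_INITIATOR_INTERFACE"
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev "$NVMF_TARGET_INTERFACE"
run ip link set "$NVMF_INITIATOR_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
run iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$NVMF_FIRST_TARGET_IP"
```

Because the target app is later launched under `ip netns exec "$NVMF_TARGET_NAMESPACE"` (the `NVMF_TARGET_NS_CMD` prefix in the log), the listener binds inside the namespace while bdevperf connects from the host side.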
00:18:08.908 17:28:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:08.908 17:28:16 -- common/autotest_common.sh@819 -- # '[' -z 3162279 ']' 00:18:08.908 17:28:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.908 17:28:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:08.908 17:28:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.908 17:28:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:08.908 17:28:16 -- common/autotest_common.sh@10 -- # set +x 00:18:08.908 [2024-10-13 17:28:16.960777] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:08.908 [2024-10-13 17:28:16.960846] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.908 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.908 [2024-10-13 17:28:17.052856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:08.908 [2024-10-13 17:28:17.100142] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:08.908 [2024-10-13 17:28:17.100297] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.908 [2024-10-13 17:28:17.100306] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.908 [2024-10-13 17:28:17.100314] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:08.908 [2024-10-13 17:28:17.100441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.908 [2024-10-13 17:28:17.100604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.908 [2024-10-13 17:28:17.100770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.908 [2024-10-13 17:28:17.100772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:09.480 17:28:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:09.480 17:28:17 -- common/autotest_common.sh@852 -- # return 0 00:18:09.480 17:28:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:09.480 17:28:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:09.480 17:28:17 -- common/autotest_common.sh@10 -- # set +x 00:18:09.480 17:28:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.480 17:28:17 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.480 17:28:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.480 17:28:17 -- common/autotest_common.sh@10 -- # set +x 00:18:09.480 [2024-10-13 17:28:17.802324] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.480 17:28:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.480 17:28:17 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:18:09.480 17:28:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:09.480 17:28:17 -- common/autotest_common.sh@10 -- # set +x 00:18:09.480 17:28:17 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:09.480 17:28:17 -- target/host_management.sh@23 -- # cat 00:18:09.480 17:28:17 -- target/host_management.sh@30 -- # rpc_cmd 00:18:09.480 17:28:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.480 17:28:17 -- common/autotest_common.sh@10 -- # set +x 00:18:09.480 
Malloc0 00:18:09.480 [2024-10-13 17:28:17.865626] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.480 17:28:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.480 17:28:17 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:18:09.480 17:28:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:09.480 17:28:17 -- common/autotest_common.sh@10 -- # set +x 00:18:09.480 17:28:17 -- target/host_management.sh@73 -- # perfpid=3162639 00:18:09.480 17:28:17 -- target/host_management.sh@74 -- # waitforlisten 3162639 /var/tmp/bdevperf.sock 00:18:09.480 17:28:17 -- common/autotest_common.sh@819 -- # '[' -z 3162639 ']' 00:18:09.480 17:28:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.480 17:28:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:09.480 17:28:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:09.480 17:28:17 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:09.480 17:28:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:09.480 17:28:17 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:18:09.480 17:28:17 -- common/autotest_common.sh@10 -- # set +x 00:18:09.480 17:28:17 -- nvmf/common.sh@520 -- # config=() 00:18:09.480 17:28:17 -- nvmf/common.sh@520 -- # local subsystem config 00:18:09.480 17:28:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:09.480 17:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:09.480 { 00:18:09.480 "params": { 00:18:09.480 "name": "Nvme$subsystem", 00:18:09.480 "trtype": "$TEST_TRANSPORT", 00:18:09.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.480 "adrfam": "ipv4", 00:18:09.480 "trsvcid": "$NVMF_PORT", 00:18:09.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.480 "hdgst": ${hdgst:-false}, 00:18:09.480 "ddgst": ${ddgst:-false} 00:18:09.480 }, 00:18:09.480 "method": "bdev_nvme_attach_controller" 00:18:09.480 } 00:18:09.480 EOF 00:18:09.480 )") 00:18:09.480 17:28:17 -- nvmf/common.sh@542 -- # cat 00:18:09.480 17:28:17 -- nvmf/common.sh@544 -- # jq . 
00:18:09.480 17:28:17 -- nvmf/common.sh@545 -- # IFS=, 00:18:09.480 17:28:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:09.480 "params": { 00:18:09.480 "name": "Nvme0", 00:18:09.480 "trtype": "tcp", 00:18:09.480 "traddr": "10.0.0.2", 00:18:09.480 "adrfam": "ipv4", 00:18:09.480 "trsvcid": "4420", 00:18:09.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:09.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:09.480 "hdgst": false, 00:18:09.480 "ddgst": false 00:18:09.480 }, 00:18:09.480 "method": "bdev_nvme_attach_controller" 00:18:09.480 }' 00:18:09.480 [2024-10-13 17:28:17.962329] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:09.480 [2024-10-13 17:28:17.962382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162639 ] 00:18:09.480 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.740 [2024-10-13 17:28:18.023711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.740 [2024-10-13 17:28:18.052697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.999 Running I/O for 10 seconds... 
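`gen_nvmf_target_json` above expands a here-doc template once per subsystem, applying `${hdgst:-false}`-style defaults, and the result is fed to bdevperf via `--json /dev/fd/63`. A minimal sketch of the per-subsystem template expansion (the `jq .`/`IFS=,` joining step from the log is omitted here; function name is an assumption):

```shell
# Sketch of one subsystem's config fragment, mirroring the heredoc template
# and the ${hdgst:-false} defaulting seen in nvmf/common.sh@542.
gen_target_json() {
	local subsystem=$1
	cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
config=$(gen_target_json 0)
```

With the log's values substituted, this yields exactly the `Nvme0` / `nqn.2016-06.io.spdk:cnode0` block printed at nvmf/common.sh@546.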
00:18:10.260 17:28:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:10.260 17:28:18 -- common/autotest_common.sh@852 -- # return 0 00:18:10.260 17:28:18 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:10.260 17:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:10.260 17:28:18 -- common/autotest_common.sh@10 -- # set +x 00:18:10.260 17:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:10.260 17:28:18 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.260 17:28:18 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:18:10.260 17:28:18 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:10.260 17:28:18 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:18:10.260 17:28:18 -- target/host_management.sh@52 -- # local ret=1 00:18:10.260 17:28:18 -- target/host_management.sh@53 -- # local i 00:18:10.260 17:28:18 -- target/host_management.sh@54 -- # (( i = 10 )) 00:18:10.260 17:28:18 -- target/host_management.sh@54 -- # (( i != 0 )) 00:18:10.260 17:28:18 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:18:10.260 17:28:18 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:18:10.260 17:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:10.260 17:28:18 -- common/autotest_common.sh@10 -- # set +x 00:18:10.260 17:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:10.522 17:28:18 -- target/host_management.sh@55 -- # read_io_count=1626 00:18:10.522 17:28:18 -- target/host_management.sh@58 -- # '[' 1626 -ge 100 ']' 00:18:10.522 17:28:18 -- target/host_management.sh@59 -- # ret=0 00:18:10.522 17:28:18 -- target/host_management.sh@60 -- # break 00:18:10.522 17:28:18 -- target/host_management.sh@64 -- # return 0 00:18:10.522 17:28:18 -- 
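The `waitforio` helper above polls `bdev_get_iostat` over the bdevperf RPC socket, piping through `jq -r '.bdevs[0].num_read_ops'`, with a bounded retry counter before declaring success. A runnable sketch of that loop, where `get_read_io_count` stubs the real `rpc_cmd -s /var/tmp/bdevperf.sock` pipeline (the stub and its numbers are assumptions so the sketch executes here):

```shell
# Sketch of waitforio: poll a read-ops counter until it crosses a threshold
# or the bounded retries run out.
poll=0
get_read_io_count() { echo $(( poll * 600 )); }   # stub: pretend I/O accumulates

waitforio() {
	local i count
	for (( i = 10; i != 0; i-- )); do         # bounded retries, as in the log
		poll=$(( poll + 1 ))
		count=$(get_read_io_count)
		if [ "$count" -ge 1000 ]; then    # enough reads observed: success
			return 0
		fi
		sleep 0.1
	done
	return 1                                  # I/O never started flowing
}

waitforio && echo "read I/O flowing after $poll polls"
```

In the log the first poll already reports `read_io_count=1626`, which clears the `-ge 100` threshold, so the harness breaks out immediately and proceeds to the host-removal step.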
target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:10.522 17:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:10.522 17:28:18 -- common/autotest_common.sh@10 -- # set +x 00:18:10.522 [2024-10-13 17:28:18.816879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22220 is same with the state(5) to be set 00:18:10.522 [last message repeated for tqpair=0xb22220 through 17:28:18.817149] 00:18:10.522 [2024-10-13 17:28:18.818989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.522 [2024-10-13 17:28:18.819030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:18:10.522 [analogous nvme_io_qpair_print_command / spdk_nvme_print_completion 'ABORTED - SQ DELETION (00/08)' pairs repeated for the remaining in-flight READ and WRITE commands, lba 99968 through 105088, from 17:28:18.819047 through 17:28:18.820117] 00:18:10.524 [2024-10-13 17:28:18.820127] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.524 [2024-10-13 17:28:18.820135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.524 [2024-10-13 17:28:18.820146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.524 [2024-10-13 17:28:18.820154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.524 [2024-10-13 17:28:18.820164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.524 [2024-10-13 17:28:18.820173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.524 [2024-10-13 17:28:18.820184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.524 [2024-10-13 17:28:18.820192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.524 [2024-10-13 17:28:18.820202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.524 [2024-10-13 17:28:18.820212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.524 [2024-10-13 17:28:18.820222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.524 [2024-10-13 17:28:18.820230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.524 [2024-10-13 17:28:18.820240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.524 [2024-10-13 17:28:18.820249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.524 [2024-10-13 17:28:18.820310] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b70940 was disconnected and freed. reset controller. 00:18:10.524 [2024-10-13 17:28:18.821511] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:10.524 17:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:10.524 17:28:18 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:10.524 17:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:10.524 17:28:18 -- common/autotest_common.sh@10 -- # set +x 00:18:10.524 task offset: 99712 on job bdev=Nvme0n1 fails 00:18:10.524 00:18:10.524 Latency(us) 00:18:10.524 [2024-10-13T15:28:19.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.524 [2024-10-13T15:28:19.048Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:10.524 [2024-10-13T15:28:19.048Z] Job: Nvme0n1 ended in about 0.51 seconds with error 00:18:10.524 Verification LBA range: start 0x0 length 0x400 00:18:10.524 Nvme0n1 : 0.51 3502.60 218.91 125.44 0.00 17336.03 1665.71 22500.69 00:18:10.524 [2024-10-13T15:28:19.048Z] =================================================================================================================== 00:18:10.524 [2024-10-13T15:28:19.048Z] Total : 3502.60 218.91 125.44 0.00 17336.03 1665.71 22500.69 00:18:10.524 [2024-10-13 17:28:18.823488] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on 
non-zero 00:18:10.524 [2024-10-13 17:28:18.823513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73160 (9): Bad file descriptor 00:18:10.524 [2024-10-13 17:28:18.828468] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:18:10.524 [2024-10-13 17:28:18.828541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:10.524 [2024-10-13 17:28:18.828571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.524 [2024-10-13 17:28:18.828588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:18:10.524 [2024-10-13 17:28:18.828598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:18:10.524 [2024-10-13 17:28:18.828605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:18:10.524 [2024-10-13 17:28:18.828613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b73160 00:18:10.524 [2024-10-13 17:28:18.828632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73160 (9): Bad file descriptor 00:18:10.524 [2024-10-13 17:28:18.828645] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:10.524 [2024-10-13 17:28:18.828653] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:10.524 [2024-10-13 17:28:18.828663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:18:10.524 [2024-10-13 17:28:18.828682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:10.524 17:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:10.524 17:28:18 -- target/host_management.sh@87 -- # sleep 1 00:18:11.465 17:28:19 -- target/host_management.sh@91 -- # kill -9 3162639 00:18:11.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3162639) - No such process 00:18:11.465 17:28:19 -- target/host_management.sh@91 -- # true 00:18:11.465 17:28:19 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:18:11.465 17:28:19 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:11.465 17:28:19 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:18:11.465 17:28:19 -- nvmf/common.sh@520 -- # config=() 00:18:11.465 17:28:19 -- nvmf/common.sh@520 -- # local subsystem config 00:18:11.465 17:28:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:11.465 17:28:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:11.465 { 00:18:11.465 "params": { 00:18:11.465 "name": "Nvme$subsystem", 00:18:11.465 "trtype": "$TEST_TRANSPORT", 00:18:11.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.465 "adrfam": "ipv4", 00:18:11.465 "trsvcid": "$NVMF_PORT", 00:18:11.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.465 "hdgst": ${hdgst:-false}, 00:18:11.465 "ddgst": ${ddgst:-false} 00:18:11.465 }, 00:18:11.465 "method": "bdev_nvme_attach_controller" 00:18:11.465 } 00:18:11.465 EOF 00:18:11.465 )") 00:18:11.465 17:28:19 -- nvmf/common.sh@542 -- # cat 00:18:11.465 17:28:19 -- nvmf/common.sh@544 -- # jq . 
00:18:11.465 17:28:19 -- nvmf/common.sh@545 -- # IFS=, 00:18:11.465 17:28:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:11.465 "params": { 00:18:11.465 "name": "Nvme0", 00:18:11.465 "trtype": "tcp", 00:18:11.465 "traddr": "10.0.0.2", 00:18:11.465 "adrfam": "ipv4", 00:18:11.465 "trsvcid": "4420", 00:18:11.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:11.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:11.465 "hdgst": false, 00:18:11.465 "ddgst": false 00:18:11.465 }, 00:18:11.465 "method": "bdev_nvme_attach_controller" 00:18:11.465 }' 00:18:11.465 [2024-10-13 17:28:19.899470] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:11.465 [2024-10-13 17:28:19.899539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162995 ] 00:18:11.465 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.465 [2024-10-13 17:28:19.962227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.465 [2024-10-13 17:28:19.989049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.769 Running I/O for 1 seconds... 
00:18:12.810 00:18:12.810 Latency(us) 00:18:12.810 [2024-10-13T15:28:21.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.810 [2024-10-13T15:28:21.334Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.810 Verification LBA range: start 0x0 length 0x400 00:18:12.810 Nvme0n1 : 1.01 3452.97 215.81 0.00 0.00 18269.96 2116.27 22282.24 00:18:12.810 [2024-10-13T15:28:21.334Z] =================================================================================================================== 00:18:12.810 [2024-10-13T15:28:21.334Z] Total : 3452.97 215.81 0.00 0.00 18269.96 2116.27 22282.24 00:18:13.069 17:28:21 -- target/host_management.sh@101 -- # stoptarget 00:18:13.069 17:28:21 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:18:13.069 17:28:21 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:13.070 17:28:21 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:13.070 17:28:21 -- target/host_management.sh@40 -- # nvmftestfini 00:18:13.070 17:28:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:13.070 17:28:21 -- nvmf/common.sh@116 -- # sync 00:18:13.070 17:28:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:13.070 17:28:21 -- nvmf/common.sh@119 -- # set +e 00:18:13.070 17:28:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:13.070 17:28:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:13.070 rmmod nvme_tcp 00:18:13.070 rmmod nvme_fabrics 00:18:13.070 rmmod nvme_keyring 00:18:13.070 17:28:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:13.070 17:28:21 -- nvmf/common.sh@123 -- # set -e 00:18:13.070 17:28:21 -- nvmf/common.sh@124 -- # return 0 00:18:13.070 17:28:21 -- nvmf/common.sh@477 -- # '[' -n 3162279 ']' 00:18:13.070 17:28:21 -- nvmf/common.sh@478 -- # killprocess 3162279 00:18:13.070 
17:28:21 -- common/autotest_common.sh@926 -- # '[' -z 3162279 ']' 00:18:13.070 17:28:21 -- common/autotest_common.sh@930 -- # kill -0 3162279 00:18:13.070 17:28:21 -- common/autotest_common.sh@931 -- # uname 00:18:13.070 17:28:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:13.070 17:28:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3162279 00:18:13.070 17:28:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:13.070 17:28:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:13.070 17:28:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3162279' 00:18:13.070 killing process with pid 3162279 00:18:13.070 17:28:21 -- common/autotest_common.sh@945 -- # kill 3162279 00:18:13.070 17:28:21 -- common/autotest_common.sh@950 -- # wait 3162279 00:18:13.070 [2024-10-13 17:28:21.584838] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:18:13.329 17:28:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:13.329 17:28:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:13.329 17:28:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:13.330 17:28:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.330 17:28:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:13.330 17:28:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.330 17:28:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.330 17:28:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.241 17:28:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:15.241 00:18:15.241 real 0m6.774s 00:18:15.241 user 0m20.406s 00:18:15.241 sys 0m1.150s 00:18:15.241 17:28:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.241 17:28:23 -- common/autotest_common.sh@10 -- # set +x 00:18:15.241 ************************************ 00:18:15.241 END TEST nvmf_host_management 
00:18:15.241 ************************************ 00:18:15.241 17:28:23 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:15.241 00:18:15.241 real 0m14.312s 00:18:15.241 user 0m22.379s 00:18:15.241 sys 0m6.656s 00:18:15.241 17:28:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.241 17:28:23 -- common/autotest_common.sh@10 -- # set +x 00:18:15.241 ************************************ 00:18:15.241 END TEST nvmf_host_management 00:18:15.241 ************************************ 00:18:15.241 17:28:23 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:15.241 17:28:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:15.241 17:28:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:15.241 17:28:23 -- common/autotest_common.sh@10 -- # set +x 00:18:15.241 ************************************ 00:18:15.241 START TEST nvmf_lvol 00:18:15.241 ************************************ 00:18:15.241 17:28:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:15.501 * Looking for test storage... 
00:18:15.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.501 17:28:23 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.501 17:28:23 -- nvmf/common.sh@7 -- # uname -s 00:18:15.501 17:28:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.501 17:28:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.501 17:28:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.501 17:28:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.501 17:28:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.501 17:28:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.501 17:28:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.501 17:28:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.501 17:28:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.501 17:28:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.501 17:28:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.501 17:28:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.501 17:28:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.501 17:28:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.501 17:28:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.501 17:28:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.501 17:28:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.501 17:28:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.501 17:28:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.502 17:28:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.502 17:28:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.502 17:28:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.502 17:28:23 -- paths/export.sh@5 -- # export PATH 00:18:15.502 17:28:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.502 17:28:23 -- nvmf/common.sh@46 -- # : 0 00:18:15.502 17:28:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:15.502 17:28:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:15.502 17:28:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:15.502 17:28:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.502 17:28:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.502 17:28:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:15.502 17:28:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:15.502 17:28:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:15.502 17:28:23 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.502 17:28:23 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.502 17:28:23 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:18:15.502 17:28:23 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:18:15.502 17:28:23 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.502 17:28:23 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:18:15.502 17:28:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:15.502 17:28:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.502 17:28:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:15.502 17:28:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:15.502 17:28:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 
00:18:15.502 17:28:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.502 17:28:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.502 17:28:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.502 17:28:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:15.502 17:28:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:15.502 17:28:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:15.502 17:28:23 -- common/autotest_common.sh@10 -- # set +x 00:18:23.699 17:28:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:23.699 17:28:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:23.699 17:28:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:23.699 17:28:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:23.699 17:28:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:23.699 17:28:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:23.699 17:28:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:23.699 17:28:30 -- nvmf/common.sh@294 -- # net_devs=() 00:18:23.699 17:28:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:23.699 17:28:30 -- nvmf/common.sh@295 -- # e810=() 00:18:23.699 17:28:30 -- nvmf/common.sh@295 -- # local -ga e810 00:18:23.699 17:28:30 -- nvmf/common.sh@296 -- # x722=() 00:18:23.699 17:28:30 -- nvmf/common.sh@296 -- # local -ga x722 00:18:23.699 17:28:30 -- nvmf/common.sh@297 -- # mlx=() 00:18:23.699 17:28:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:23.699 17:28:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.699 17:28:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.699 17:28:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.699 17:28:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.699 17:28:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.699 17:28:30 -- 
nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.699 17:28:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.699 17:28:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.699 17:28:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.699 17:28:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.699 17:28:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.699 17:28:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:23.699 17:28:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:23.699 17:28:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:23.699 17:28:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:23.699 17:28:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:23.699 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:23.699 17:28:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:23.699 17:28:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:23.699 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:23.699 17:28:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.699 17:28:30 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:23.699 17:28:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:23.699 17:28:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.699 17:28:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:23.699 17:28:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.699 17:28:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:23.699 Found net devices under 0000:31:00.0: cvl_0_0 00:18:23.699 17:28:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.699 17:28:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:23.699 17:28:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.699 17:28:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:23.699 17:28:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.699 17:28:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:23.699 Found net devices under 0000:31:00.1: cvl_0_1 00:18:23.699 17:28:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.699 17:28:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:23.699 17:28:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:23.699 17:28:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:23.699 17:28:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:23.699 17:28:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.699 17:28:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.699 17:28:30 -- nvmf/common.sh@230 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.699 17:28:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:23.699 17:28:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.699 17:28:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.699 17:28:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:23.699 17:28:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.699 17:28:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.700 17:28:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:23.700 17:28:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:23.700 17:28:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.700 17:28:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.700 17:28:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.700 17:28:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.700 17:28:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:23.700 17:28:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.700 17:28:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.700 17:28:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.700 17:28:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:23.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:18:23.700 00:18:23.700 --- 10.0.0.2 ping statistics --- 00:18:23.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.700 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:18:23.700 17:28:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:23.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:18:23.700 00:18:23.700 --- 10.0.0.1 ping statistics --- 00:18:23.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.700 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:18:23.700 17:28:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.700 17:28:31 -- nvmf/common.sh@410 -- # return 0 00:18:23.700 17:28:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:23.700 17:28:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.700 17:28:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:23.700 17:28:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:23.700 17:28:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.700 17:28:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:23.700 17:28:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:23.700 17:28:31 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:23.700 17:28:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:23.700 17:28:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:23.700 17:28:31 -- common/autotest_common.sh@10 -- # set +x 00:18:23.700 17:28:31 -- nvmf/common.sh@469 -- # nvmfpid=3167492 00:18:23.700 17:28:31 -- nvmf/common.sh@470 -- # waitforlisten 3167492 00:18:23.700 17:28:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:23.700 17:28:31 -- common/autotest_common.sh@819 -- # '[' -z 3167492 ']' 00:18:23.700 17:28:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.700 17:28:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:23.700 17:28:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:23.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.700 17:28:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:23.700 17:28:31 -- common/autotest_common.sh@10 -- # set +x 00:18:23.700 [2024-10-13 17:28:31.309390] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:23.700 [2024-10-13 17:28:31.309460] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.700 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.700 [2024-10-13 17:28:31.385444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:23.700 [2024-10-13 17:28:31.422525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:23.700 [2024-10-13 17:28:31.422673] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.700 [2024-10-13 17:28:31.422685] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.700 [2024-10-13 17:28:31.422692] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:23.700 [2024-10-13 17:28:31.422845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.700 [2024-10-13 17:28:31.422963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.700 [2024-10-13 17:28:31.422965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.700 17:28:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:23.700 17:28:32 -- common/autotest_common.sh@852 -- # return 0 00:18:23.700 17:28:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:23.700 17:28:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:23.700 17:28:32 -- common/autotest_common.sh@10 -- # set +x 00:18:23.700 17:28:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.700 17:28:32 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:23.960 [2024-10-13 17:28:32.286517] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.960 17:28:32 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:24.220 17:28:32 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:24.220 17:28:32 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:24.220 17:28:32 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:24.220 17:28:32 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:24.480 17:28:32 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:24.740 17:28:33 -- target/nvmf_lvol.sh@29 -- # lvs=16460aa5-052b-469d-afd2-9b0f8d2d2510 00:18:24.740 17:28:33 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 16460aa5-052b-469d-afd2-9b0f8d2d2510 lvol 20 00:18:24.740 17:28:33 -- target/nvmf_lvol.sh@32 -- # lvol=9deedb90-528f-48de-b574-094baf5cec10 00:18:24.740 17:28:33 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:25.000 17:28:33 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9deedb90-528f-48de-b574-094baf5cec10 00:18:25.261 17:28:33 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:25.261 [2024-10-13 17:28:33.694055] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.261 17:28:33 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:25.522 17:28:33 -- target/nvmf_lvol.sh@42 -- # perf_pid=3168135 00:18:25.522 17:28:33 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:25.522 17:28:33 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:25.522 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.464 17:28:34 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9deedb90-528f-48de-b574-094baf5cec10 MY_SNAPSHOT 00:18:26.726 17:28:35 -- target/nvmf_lvol.sh@47 -- # snapshot=0bda7049-88b5-411b-9833-3929c268cea1 00:18:26.726 17:28:35 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
9deedb90-528f-48de-b574-094baf5cec10 30 00:18:26.987 17:28:35 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0bda7049-88b5-411b-9833-3929c268cea1 MY_CLONE 00:18:26.987 17:28:35 -- target/nvmf_lvol.sh@49 -- # clone=854521ff-088f-44f1-a34a-b5e51b93a848 00:18:26.987 17:28:35 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 854521ff-088f-44f1-a34a-b5e51b93a848 00:18:27.557 17:28:35 -- target/nvmf_lvol.sh@53 -- # wait 3168135 00:18:35.691 Initializing NVMe Controllers 00:18:35.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:35.691 Controller IO queue size 128, less than required. 00:18:35.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:35.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:35.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:35.691 Initialization complete. Launching workers. 
00:18:35.691 ======================================================== 00:18:35.691 Latency(us) 00:18:35.691 Device Information : IOPS MiB/s Average min max 00:18:35.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12474.60 48.73 10262.94 1549.79 56777.56 00:18:35.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17695.90 69.12 7234.37 2039.11 49184.13 00:18:35.691 ======================================================== 00:18:35.691 Total : 30170.50 117.85 8486.59 1549.79 56777.56 00:18:35.691 00:18:35.691 17:28:44 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:35.952 17:28:44 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9deedb90-528f-48de-b574-094baf5cec10 00:18:36.211 17:28:44 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16460aa5-052b-469d-afd2-9b0f8d2d2510 00:18:36.471 17:28:44 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:36.471 17:28:44 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:36.471 17:28:44 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:36.471 17:28:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:36.471 17:28:44 -- nvmf/common.sh@116 -- # sync 00:18:36.471 17:28:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:36.471 17:28:44 -- nvmf/common.sh@119 -- # set +e 00:18:36.471 17:28:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:36.471 17:28:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:36.471 rmmod nvme_tcp 00:18:36.471 rmmod nvme_fabrics 00:18:36.471 rmmod nvme_keyring 00:18:36.471 17:28:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:36.471 17:28:44 -- nvmf/common.sh@123 -- # set -e 00:18:36.471 17:28:44 -- nvmf/common.sh@124 -- # return 0 00:18:36.471 17:28:44 -- nvmf/common.sh@477 -- # '[' -n 
3167492 ']' 00:18:36.471 17:28:44 -- nvmf/common.sh@478 -- # killprocess 3167492 00:18:36.471 17:28:44 -- common/autotest_common.sh@926 -- # '[' -z 3167492 ']' 00:18:36.471 17:28:44 -- common/autotest_common.sh@930 -- # kill -0 3167492 00:18:36.471 17:28:44 -- common/autotest_common.sh@931 -- # uname 00:18:36.471 17:28:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:36.471 17:28:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3167492 00:18:36.471 17:28:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:36.471 17:28:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:36.471 17:28:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3167492' 00:18:36.471 killing process with pid 3167492 00:18:36.471 17:28:44 -- common/autotest_common.sh@945 -- # kill 3167492 00:18:36.471 17:28:44 -- common/autotest_common.sh@950 -- # wait 3167492 00:18:36.730 17:28:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:36.730 17:28:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:36.730 17:28:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:36.731 17:28:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.731 17:28:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:36.731 17:28:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.731 17:28:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.731 17:28:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.640 17:28:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:38.640 00:18:38.640 real 0m23.348s 00:18:38.640 user 1m3.558s 00:18:38.640 sys 0m8.256s 00:18:38.640 17:28:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.640 17:28:47 -- common/autotest_common.sh@10 -- # set +x 00:18:38.640 ************************************ 00:18:38.640 END TEST nvmf_lvol 00:18:38.640 ************************************ 
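For reference, the RPC sequence the nvmf_lvol test drove above (two malloc bdevs striped into a RAID0, an lvstore on top, then lvol, snapshot, resize, clone, and inflate while perf I/O runs) can be condensed into one sketch. The `rpc.py` path is abbreviated from this run's workspace; it assumes a running `nvmf_tgt` listening on the default `/var/tmp/spdk.sock` with the TCP transport already created.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_lvol flow recorded above, issued against a live nvmf_tgt.
set -euo pipefail
RPC=./scripts/rpc.py   # path inside an SPDK checkout (assumption)

# Two 64 MiB / 512 B-block malloc bdevs striped into a RAID0, lvstore on top.
$RPC bdev_malloc_create 64 512            # -> Malloc0
$RPC bdev_malloc_create 64 512            # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)

# A 20 MiB lvol, exported over NVMe/TCP on the namespace address from the log.
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# While spdk_nvme_perf writes to the subsystem: snapshot the origin, grow it,
# clone the snapshot, and inflate the clone to decouple it from the snapshot.
snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$RPC bdev_lvol_resize "$lvol" 30
clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
$RPC bdev_lvol_inflate "$clone"
```

Exercising the snapshot/clone/inflate path concurrently with perf I/O is the point of the test: the latency table above shows both perf cores completing their full 10-second run despite the metadata churn.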
00:18:38.640 17:28:47 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:38.640 17:28:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:38.640 17:28:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:38.640 17:28:47 -- common/autotest_common.sh@10 -- # set +x 00:18:38.640 ************************************ 00:18:38.640 START TEST nvmf_lvs_grow 00:18:38.640 ************************************ 00:18:38.640 17:28:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:38.901 * Looking for test storage... 00:18:38.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:38.901 17:28:47 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.901 17:28:47 -- nvmf/common.sh@7 -- # uname -s 00:18:38.901 17:28:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.901 17:28:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.901 17:28:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.902 17:28:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.902 17:28:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.902 17:28:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.902 17:28:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.902 17:28:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.902 17:28:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.902 17:28:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.902 17:28:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.902 17:28:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.902 17:28:47 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.902 17:28:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.902 17:28:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.902 17:28:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.902 17:28:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.902 17:28:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.902 17:28:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.902 17:28:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.902 17:28:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.902 17:28:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.902 17:28:47 -- paths/export.sh@5 -- # export PATH 00:18:38.902 17:28:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.902 17:28:47 -- nvmf/common.sh@46 -- # : 0 00:18:38.902 17:28:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:38.902 17:28:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:38.902 17:28:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:38.902 17:28:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.902 17:28:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.902 17:28:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:38.902 17:28:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:38.902 17:28:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:38.902 17:28:47 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:38.902 17:28:47 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:38.902 17:28:47 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:38.902 17:28:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:38.902 17:28:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.902 17:28:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:38.902 17:28:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:38.902 17:28:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:38.902 17:28:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.902 17:28:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.902 17:28:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.902 17:28:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:38.902 17:28:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:38.902 17:28:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:38.902 17:28:47 -- common/autotest_common.sh@10 -- # set +x 00:18:47.035 17:28:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:47.035 17:28:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:47.035 17:28:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:47.035 17:28:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:47.035 17:28:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:47.035 17:28:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:47.035 17:28:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:47.035 17:28:54 -- nvmf/common.sh@294 -- # net_devs=() 00:18:47.035 17:28:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:47.036 17:28:54 -- nvmf/common.sh@295 -- # e810=() 00:18:47.036 17:28:54 -- nvmf/common.sh@295 -- # local -ga e810 00:18:47.036 17:28:54 -- nvmf/common.sh@296 -- # x722=() 00:18:47.036 17:28:54 -- nvmf/common.sh@296 -- # local -ga x722 00:18:47.036 17:28:54 -- nvmf/common.sh@297 -- # mlx=() 00:18:47.036 17:28:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:47.036 17:28:54 -- 
nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.036 17:28:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:47.036 17:28:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:47.036 17:28:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:47.036 17:28:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:47.036 17:28:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:47.036 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:47.036 17:28:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:47.036 
17:28:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:47.036 17:28:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:47.036 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:47.036 17:28:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:47.036 17:28:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:47.036 17:28:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.036 17:28:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:47.036 17:28:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.036 17:28:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:47.036 Found net devices under 0000:31:00.0: cvl_0_0 00:18:47.036 17:28:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.036 17:28:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:47.036 17:28:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.036 17:28:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:47.036 17:28:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.036 17:28:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:47.036 Found net devices under 0000:31:00.1: cvl_0_1 00:18:47.036 17:28:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.036 17:28:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:47.036 17:28:54 -- 
nvmf/common.sh@402 -- # is_hw=yes 00:18:47.036 17:28:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:47.036 17:28:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.036 17:28:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.036 17:28:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.036 17:28:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:47.036 17:28:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.036 17:28:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.036 17:28:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:47.036 17:28:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.036 17:28:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.036 17:28:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:47.036 17:28:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:47.036 17:28:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.036 17:28:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.036 17:28:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.036 17:28:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.036 17:28:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:47.036 17:28:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.036 17:28:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.036 17:28:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.036 17:28:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:47.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:47.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:18:47.036 00:18:47.036 --- 10.0.0.2 ping statistics --- 00:18:47.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.036 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:18:47.036 17:28:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:18:47.036 00:18:47.036 --- 10.0.0.1 ping statistics --- 00:18:47.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.036 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:18:47.036 17:28:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.036 17:28:54 -- nvmf/common.sh@410 -- # return 0 00:18:47.036 17:28:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:47.036 17:28:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.036 17:28:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:47.036 17:28:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.036 17:28:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:47.036 17:28:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:47.036 17:28:54 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:47.036 17:28:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:47.036 17:28:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:47.036 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:18:47.036 17:28:54 -- nvmf/common.sh@469 -- # nvmfpid=3174579 00:18:47.036 17:28:54 -- nvmf/common.sh@470 -- # waitforlisten 3174579 00:18:47.036 17:28:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:47.036 17:28:54 -- 
common/autotest_common.sh@819 -- # '[' -z 3174579 ']' 00:18:47.036 17:28:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.036 17:28:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:47.036 17:28:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.036 17:28:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:47.036 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:18:47.036 [2024-10-13 17:28:54.784220] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:47.036 [2024-10-13 17:28:54.784269] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.036 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.036 [2024-10-13 17:28:54.852205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.036 [2024-10-13 17:28:54.880967] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:47.036 [2024-10-13 17:28:54.881095] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.036 [2024-10-13 17:28:54.881105] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.036 [2024-10-13 17:28:54.881113] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:47.036 [2024-10-13 17:28:54.881137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.297 17:28:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:47.297 17:28:55 -- common/autotest_common.sh@852 -- # return 0 00:18:47.297 17:28:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:47.297 17:28:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:47.297 17:28:55 -- common/autotest_common.sh@10 -- # set +x 00:18:47.297 17:28:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:47.297 [2024-10-13 17:28:55.753813] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:47.297 17:28:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:47.297 17:28:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:47.297 17:28:55 -- common/autotest_common.sh@10 -- # set +x 00:18:47.297 ************************************ 00:18:47.297 START TEST lvs_grow_clean 00:18:47.297 ************************************ 00:18:47.297 17:28:55 -- common/autotest_common.sh@1104 -- # lvs_grow 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:47.297 17:28:55 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:47.557 17:28:56 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:47.557 17:28:56 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:47.817 17:28:56 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4600c147-5bff-49b8-91a8-1828d65cec4a 00:18:47.817 17:28:56 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:18:47.817 17:28:56 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:47.817 17:28:56 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:47.817 17:28:56 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:47.817 17:28:56 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4600c147-5bff-49b8-91a8-1828d65cec4a lvol 150 00:18:48.077 17:28:56 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7407208c-c689-422a-8877-18e30796e224 00:18:48.077 17:28:56 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:48.077 17:28:56 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:48.336 [2024-10-13 17:28:56.632188] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:48.336 [2024-10-13 17:28:56.632242] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:48.336 true 00:18:48.337 17:28:56 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:18:48.337 17:28:56 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:48.337 17:28:56 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:48.337 17:28:56 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:48.597 17:28:56 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7407208c-c689-422a-8877-18e30796e224 00:18:48.597 17:28:57 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:48.857 [2024-10-13 17:28:57.238094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.857 17:28:57 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:49.117 17:28:57 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3175032 00:18:49.117 17:28:57 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:49.117 17:28:57 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:49.117 17:28:57 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3175032 /var/tmp/bdevperf.sock 00:18:49.117 17:28:57 -- common/autotest_common.sh@819 -- # '[' -z 3175032 ']' 00:18:49.117 17:28:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.117 17:28:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:49.117 17:28:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.117 17:28:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:49.117 17:28:57 -- common/autotest_common.sh@10 -- # set +x 00:18:49.117 [2024-10-13 17:28:57.442130] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:49.117 [2024-10-13 17:28:57.442179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175032 ] 00:18:49.117 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.117 [2024-10-13 17:28:57.520206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.117 [2024-10-13 17:28:57.549205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.688 17:28:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:49.688 17:28:58 -- common/autotest_common.sh@852 -- # return 0 00:18:49.688 17:28:58 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:50.257 Nvme0n1 00:18:50.257 17:28:58 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 
3000 00:18:50.257 [ 00:18:50.257 { 00:18:50.257 "name": "Nvme0n1", 00:18:50.257 "aliases": [ 00:18:50.257 "7407208c-c689-422a-8877-18e30796e224" 00:18:50.257 ], 00:18:50.257 "product_name": "NVMe disk", 00:18:50.257 "block_size": 4096, 00:18:50.257 "num_blocks": 38912, 00:18:50.257 "uuid": "7407208c-c689-422a-8877-18e30796e224", 00:18:50.257 "assigned_rate_limits": { 00:18:50.257 "rw_ios_per_sec": 0, 00:18:50.257 "rw_mbytes_per_sec": 0, 00:18:50.257 "r_mbytes_per_sec": 0, 00:18:50.257 "w_mbytes_per_sec": 0 00:18:50.257 }, 00:18:50.257 "claimed": false, 00:18:50.257 "zoned": false, 00:18:50.257 "supported_io_types": { 00:18:50.257 "read": true, 00:18:50.257 "write": true, 00:18:50.257 "unmap": true, 00:18:50.257 "write_zeroes": true, 00:18:50.257 "flush": true, 00:18:50.257 "reset": true, 00:18:50.257 "compare": true, 00:18:50.257 "compare_and_write": true, 00:18:50.257 "abort": true, 00:18:50.257 "nvme_admin": true, 00:18:50.257 "nvme_io": true 00:18:50.257 }, 00:18:50.257 "driver_specific": { 00:18:50.257 "nvme": [ 00:18:50.257 { 00:18:50.257 "trid": { 00:18:50.257 "trtype": "TCP", 00:18:50.257 "adrfam": "IPv4", 00:18:50.257 "traddr": "10.0.0.2", 00:18:50.257 "trsvcid": "4420", 00:18:50.257 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:50.257 }, 00:18:50.257 "ctrlr_data": { 00:18:50.258 "cntlid": 1, 00:18:50.258 "vendor_id": "0x8086", 00:18:50.258 "model_number": "SPDK bdev Controller", 00:18:50.258 "serial_number": "SPDK0", 00:18:50.258 "firmware_revision": "24.01.1", 00:18:50.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:50.258 "oacs": { 00:18:50.258 "security": 0, 00:18:50.258 "format": 0, 00:18:50.258 "firmware": 0, 00:18:50.258 "ns_manage": 0 00:18:50.258 }, 00:18:50.258 "multi_ctrlr": true, 00:18:50.258 "ana_reporting": false 00:18:50.258 }, 00:18:50.258 "vs": { 00:18:50.258 "nvme_version": "1.3" 00:18:50.258 }, 00:18:50.258 "ns_data": { 00:18:50.258 "id": 1, 00:18:50.258 "can_share": true 00:18:50.258 } 00:18:50.258 } 00:18:50.258 ], 00:18:50.258 
"mp_policy": "active_passive" 00:18:50.258 } 00:18:50.258 } 00:18:50.258 ] 00:18:50.258 17:28:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3175306 00:18:50.258 17:28:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:50.258 17:28:58 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.517 Running I/O for 10 seconds... 00:18:51.457 Latency(us) 00:18:51.457 [2024-10-13T15:28:59.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.457 [2024-10-13T15:28:59.981Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.457 Nvme0n1 : 1.00 18433.00 72.00 0.00 0.00 0.00 0.00 0.00 00:18:51.457 [2024-10-13T15:28:59.981Z] =================================================================================================================== 00:18:51.457 [2024-10-13T15:28:59.981Z] Total : 18433.00 72.00 0.00 0.00 0.00 0.00 0.00 00:18:51.457 00:18:52.397 17:29:00 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:18:52.397 [2024-10-13T15:29:00.921Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.397 Nvme0n1 : 2.00 18591.00 72.62 0.00 0.00 0.00 0.00 0.00 00:18:52.397 [2024-10-13T15:29:00.921Z] =================================================================================================================== 00:18:52.397 [2024-10-13T15:29:00.921Z] Total : 18591.00 72.62 0.00 0.00 0.00 0.00 0.00 00:18:52.397 00:18:52.397 true 00:18:52.397 17:29:00 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:18:52.397 17:29:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:52.658 17:29:01 -- target/nvmf_lvs_grow.sh@61 -- # 
data_clusters=99 00:18:52.658 17:29:01 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:52.658 17:29:01 -- target/nvmf_lvs_grow.sh@65 -- # wait 3175306 00:18:53.598 [2024-10-13T15:29:02.122Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.598 Nvme0n1 : 3.00 18642.33 72.82 0.00 0.00 0.00 0.00 0.00 00:18:53.598 [2024-10-13T15:29:02.122Z] =================================================================================================================== 00:18:53.598 [2024-10-13T15:29:02.123Z] Total : 18642.33 72.82 0.00 0.00 0.00 0.00 0.00 00:18:53.599 00:18:54.538 [2024-10-13T15:29:03.062Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.538 Nvme0n1 : 4.00 18668.75 72.92 0.00 0.00 0.00 0.00 0.00 00:18:54.538 [2024-10-13T15:29:03.062Z] =================================================================================================================== 00:18:54.538 [2024-10-13T15:29:03.062Z] Total : 18668.75 72.92 0.00 0.00 0.00 0.00 0.00 00:18:54.538 00:18:55.478 [2024-10-13T15:29:04.002Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.478 Nvme0n1 : 5.00 18709.80 73.09 0.00 0.00 0.00 0.00 0.00 00:18:55.478 [2024-10-13T15:29:04.002Z] =================================================================================================================== 00:18:55.478 [2024-10-13T15:29:04.002Z] Total : 18709.80 73.09 0.00 0.00 0.00 0.00 0.00 00:18:55.478 00:18:56.418 [2024-10-13T15:29:04.942Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:56.418 Nvme0n1 : 6.00 18727.33 73.15 0.00 0.00 0.00 0.00 0.00 00:18:56.418 [2024-10-13T15:29:04.942Z] =================================================================================================================== 00:18:56.418 [2024-10-13T15:29:04.942Z] Total : 18727.33 73.15 0.00 0.00 0.00 0.00 0.00 00:18:56.418 00:18:57.359 [2024-10-13T15:29:05.883Z] Job: Nvme0n1 (Core 
Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:57.359 Nvme0n1 : 7.00 18748.71 73.24 0.00 0.00 0.00 0.00 0.00 00:18:57.359 [2024-10-13T15:29:05.883Z] =================================================================================================================== 00:18:57.359 [2024-10-13T15:29:05.883Z] Total : 18748.71 73.24 0.00 0.00 0.00 0.00 0.00 00:18:57.359 00:18:58.742 [2024-10-13T15:29:07.266Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:58.742 Nvme0n1 : 8.00 18764.50 73.30 0.00 0.00 0.00 0.00 0.00 00:18:58.742 [2024-10-13T15:29:07.266Z] =================================================================================================================== 00:18:58.742 [2024-10-13T15:29:07.266Z] Total : 18764.50 73.30 0.00 0.00 0.00 0.00 0.00 00:18:58.742 00:18:59.312 [2024-10-13T15:29:07.836Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:59.312 Nvme0n1 : 9.00 18777.00 73.35 0.00 0.00 0.00 0.00 0.00 00:18:59.312 [2024-10-13T15:29:07.836Z] =================================================================================================================== 00:18:59.312 [2024-10-13T15:29:07.836Z] Total : 18777.00 73.35 0.00 0.00 0.00 0.00 0.00 00:18:59.312 00:19:00.694 [2024-10-13T15:29:09.218Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:00.694 Nvme0n1 : 10.00 18786.70 73.39 0.00 0.00 0.00 0.00 0.00 00:19:00.694 [2024-10-13T15:29:09.218Z] =================================================================================================================== 00:19:00.694 [2024-10-13T15:29:09.218Z] Total : 18786.70 73.39 0.00 0.00 0.00 0.00 0.00 00:19:00.694 00:19:00.694 00:19:00.694 Latency(us) 00:19:00.694 [2024-10-13T15:29:09.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.694 [2024-10-13T15:29:09.218Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:00.694 Nvme0n1 : 
10.01 18786.89 73.39 0.00 0.00 6809.38 3986.77 16493.23 00:19:00.694 [2024-10-13T15:29:09.218Z] =================================================================================================================== 00:19:00.694 [2024-10-13T15:29:09.218Z] Total : 18786.89 73.39 0.00 0.00 6809.38 3986.77 16493.23 00:19:00.694 0 00:19:00.694 17:29:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3175032 00:19:00.694 17:29:08 -- common/autotest_common.sh@926 -- # '[' -z 3175032 ']' 00:19:00.694 17:29:08 -- common/autotest_common.sh@930 -- # kill -0 3175032 00:19:00.694 17:29:08 -- common/autotest_common.sh@931 -- # uname 00:19:00.694 17:29:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:00.694 17:29:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3175032 00:19:00.694 17:29:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:00.694 17:29:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:00.694 17:29:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3175032' 00:19:00.694 killing process with pid 3175032 00:19:00.694 17:29:08 -- common/autotest_common.sh@945 -- # kill 3175032 00:19:00.694 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.694 00:19:00.694 Latency(us) 00:19:00.694 [2024-10-13T15:29:09.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.694 [2024-10-13T15:29:09.218Z] =================================================================================================================== 00:19:00.694 [2024-10-13T15:29:09.218Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.694 17:29:08 -- common/autotest_common.sh@950 -- # wait 3175032 00:19:00.694 17:29:09 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:00.694 17:29:09 -- target/nvmf_lvs_grow.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:19:00.694 17:29:09 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:00.953 17:29:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:00.953 17:29:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:19:00.953 17:29:09 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:01.214 [2024-10-13 17:29:09.504219] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:01.214 17:29:09 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:19:01.214 17:29:09 -- common/autotest_common.sh@640 -- # local es=0 00:19:01.214 17:29:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:19:01.214 17:29:09 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:01.214 17:29:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:01.214 17:29:09 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:01.214 17:29:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:01.214 17:29:09 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:01.214 17:29:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:01.214 17:29:09 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:01.214 17:29:09 -- common/autotest_common.sh@634 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:01.215 17:29:09 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:19:01.215 request: 00:19:01.215 { 00:19:01.215 "uuid": "4600c147-5bff-49b8-91a8-1828d65cec4a", 00:19:01.215 "method": "bdev_lvol_get_lvstores", 00:19:01.215 "req_id": 1 00:19:01.215 } 00:19:01.215 Got JSON-RPC error response 00:19:01.215 response: 00:19:01.215 { 00:19:01.215 "code": -19, 00:19:01.215 "message": "No such device" 00:19:01.215 } 00:19:01.215 17:29:09 -- common/autotest_common.sh@643 -- # es=1 00:19:01.215 17:29:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:01.215 17:29:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:01.215 17:29:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:01.215 17:29:09 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:01.475 aio_bdev 00:19:01.475 17:29:09 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7407208c-c689-422a-8877-18e30796e224 00:19:01.475 17:29:09 -- common/autotest_common.sh@887 -- # local bdev_name=7407208c-c689-422a-8877-18e30796e224 00:19:01.475 17:29:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:01.475 17:29:09 -- common/autotest_common.sh@889 -- # local i 00:19:01.475 17:29:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:01.475 17:29:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:01.475 17:29:09 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:01.736 17:29:10 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7407208c-c689-422a-8877-18e30796e224 -t 2000 
00:19:01.736 [ 00:19:01.736 { 00:19:01.736 "name": "7407208c-c689-422a-8877-18e30796e224", 00:19:01.736 "aliases": [ 00:19:01.736 "lvs/lvol" 00:19:01.736 ], 00:19:01.736 "product_name": "Logical Volume", 00:19:01.736 "block_size": 4096, 00:19:01.736 "num_blocks": 38912, 00:19:01.736 "uuid": "7407208c-c689-422a-8877-18e30796e224", 00:19:01.736 "assigned_rate_limits": { 00:19:01.736 "rw_ios_per_sec": 0, 00:19:01.736 "rw_mbytes_per_sec": 0, 00:19:01.736 "r_mbytes_per_sec": 0, 00:19:01.736 "w_mbytes_per_sec": 0 00:19:01.736 }, 00:19:01.736 "claimed": false, 00:19:01.736 "zoned": false, 00:19:01.736 "supported_io_types": { 00:19:01.736 "read": true, 00:19:01.736 "write": true, 00:19:01.736 "unmap": true, 00:19:01.736 "write_zeroes": true, 00:19:01.736 "flush": false, 00:19:01.736 "reset": true, 00:19:01.736 "compare": false, 00:19:01.736 "compare_and_write": false, 00:19:01.736 "abort": false, 00:19:01.736 "nvme_admin": false, 00:19:01.736 "nvme_io": false 00:19:01.736 }, 00:19:01.736 "driver_specific": { 00:19:01.736 "lvol": { 00:19:01.736 "lvol_store_uuid": "4600c147-5bff-49b8-91a8-1828d65cec4a", 00:19:01.736 "base_bdev": "aio_bdev", 00:19:01.736 "thin_provision": false, 00:19:01.736 "snapshot": false, 00:19:01.736 "clone": false, 00:19:01.736 "esnap_clone": false 00:19:01.736 } 00:19:01.736 } 00:19:01.736 } 00:19:01.736 ] 00:19:01.736 17:29:10 -- common/autotest_common.sh@895 -- # return 0 00:19:01.736 17:29:10 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:19:01.736 17:29:10 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:01.996 17:29:10 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:01.997 17:29:10 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:19:01.997 17:29:10 -- target/nvmf_lvs_grow.sh@88 
-- # jq -r '.[0].total_data_clusters' 00:19:01.997 17:29:10 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:01.997 17:29:10 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7407208c-c689-422a-8877-18e30796e224 00:19:02.257 17:29:10 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4600c147-5bff-49b8-91a8-1828d65cec4a 00:19:02.518 17:29:10 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:02.518 17:29:10 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:02.518 00:19:02.518 real 0m15.210s 00:19:02.518 user 0m14.962s 00:19:02.518 sys 0m1.271s 00:19:02.518 17:29:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.518 17:29:11 -- common/autotest_common.sh@10 -- # set +x 00:19:02.518 ************************************ 00:19:02.518 END TEST lvs_grow_clean 00:19:02.518 ************************************ 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:19:02.778 17:29:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:02.778 17:29:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:02.778 17:29:11 -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 ************************************ 00:19:02.778 START TEST lvs_grow_dirty 00:19:02.778 ************************************ 00:19:02.778 17:29:11 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:02.778 17:29:11 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:03.039 17:29:11 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:03.039 17:29:11 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:03.039 17:29:11 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:03.299 17:29:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:03.299 17:29:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:03.299 17:29:11 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a391bfc5-612f-42bb-9290-864ecf7d1f96 lvol 150 00:19:03.299 17:29:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b8726e22-497d-4024-a4b2-f0bbb3b92782 00:19:03.299 17:29:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:03.299 17:29:11 -- 
target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:03.559 [2024-10-13 17:29:11.884147] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:03.559 [2024-10-13 17:29:11.884199] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:03.559 true 00:19:03.560 17:29:11 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:03.560 17:29:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:03.560 17:29:12 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:03.560 17:29:12 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:03.820 17:29:12 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b8726e22-497d-4024-a4b2-f0bbb3b92782 00:19:03.820 17:29:12 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:04.080 17:29:12 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:04.340 17:29:12 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3178078 00:19:04.340 17:29:12 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:04.340 17:29:12 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:04.340 17:29:12 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3178078 /var/tmp/bdevperf.sock 00:19:04.340 17:29:12 -- common/autotest_common.sh@819 -- # '[' -z 3178078 ']' 00:19:04.340 17:29:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.340 17:29:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:04.340 17:29:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.340 17:29:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:04.340 17:29:12 -- common/autotest_common.sh@10 -- # set +x 00:19:04.340 [2024-10-13 17:29:12.703362] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:04.340 [2024-10-13 17:29:12.703412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3178078 ] 00:19:04.340 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.340 [2024-10-13 17:29:12.781116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.340 [2024-10-13 17:29:12.808134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.283 17:29:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:05.283 17:29:13 -- common/autotest_common.sh@852 -- # return 0 00:19:05.283 17:29:13 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:05.283 Nvme0n1 00:19:05.542 17:29:13 -- target/nvmf_lvs_grow.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:05.542 [ 00:19:05.542 { 00:19:05.542 "name": "Nvme0n1", 00:19:05.542 "aliases": [ 00:19:05.542 "b8726e22-497d-4024-a4b2-f0bbb3b92782" 00:19:05.542 ], 00:19:05.542 "product_name": "NVMe disk", 00:19:05.542 "block_size": 4096, 00:19:05.542 "num_blocks": 38912, 00:19:05.542 "uuid": "b8726e22-497d-4024-a4b2-f0bbb3b92782", 00:19:05.542 "assigned_rate_limits": { 00:19:05.542 "rw_ios_per_sec": 0, 00:19:05.542 "rw_mbytes_per_sec": 0, 00:19:05.542 "r_mbytes_per_sec": 0, 00:19:05.542 "w_mbytes_per_sec": 0 00:19:05.542 }, 00:19:05.542 "claimed": false, 00:19:05.543 "zoned": false, 00:19:05.543 "supported_io_types": { 00:19:05.543 "read": true, 00:19:05.543 "write": true, 00:19:05.543 "unmap": true, 00:19:05.543 "write_zeroes": true, 00:19:05.543 "flush": true, 00:19:05.543 "reset": true, 00:19:05.543 "compare": true, 00:19:05.543 "compare_and_write": true, 00:19:05.543 "abort": true, 00:19:05.543 "nvme_admin": true, 00:19:05.543 "nvme_io": true 00:19:05.543 }, 00:19:05.543 "driver_specific": { 00:19:05.543 "nvme": [ 00:19:05.543 { 00:19:05.543 "trid": { 00:19:05.543 "trtype": "TCP", 00:19:05.543 "adrfam": "IPv4", 00:19:05.543 "traddr": "10.0.0.2", 00:19:05.543 "trsvcid": "4420", 00:19:05.543 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:05.543 }, 00:19:05.543 "ctrlr_data": { 00:19:05.543 "cntlid": 1, 00:19:05.543 "vendor_id": "0x8086", 00:19:05.543 "model_number": "SPDK bdev Controller", 00:19:05.543 "serial_number": "SPDK0", 00:19:05.543 "firmware_revision": "24.01.1", 00:19:05.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:05.543 "oacs": { 00:19:05.543 "security": 0, 00:19:05.543 "format": 0, 00:19:05.543 "firmware": 0, 00:19:05.543 "ns_manage": 0 00:19:05.543 }, 00:19:05.543 "multi_ctrlr": true, 00:19:05.543 "ana_reporting": false 00:19:05.543 }, 00:19:05.543 "vs": { 00:19:05.543 "nvme_version": "1.3" 00:19:05.543 }, 00:19:05.543 "ns_data": { 
00:19:05.543 "id": 1, 00:19:05.543 "can_share": true 00:19:05.543 } 00:19:05.543 } 00:19:05.543 ], 00:19:05.543 "mp_policy": "active_passive" 00:19:05.543 } 00:19:05.543 } 00:19:05.543 ] 00:19:05.543 17:29:13 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3178412 00:19:05.543 17:29:13 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:05.543 17:29:13 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.803 Running I/O for 10 seconds... 00:19:06.744 Latency(us) 00:19:06.744 [2024-10-13T15:29:15.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.744 [2024-10-13T15:29:15.268Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:06.744 Nvme0n1 : 1.00 18432.00 72.00 0.00 0.00 0.00 0.00 0.00 00:19:06.744 [2024-10-13T15:29:15.268Z] =================================================================================================================== 00:19:06.744 [2024-10-13T15:29:15.268Z] Total : 18432.00 72.00 0.00 0.00 0.00 0.00 0.00 00:19:06.744 00:19:07.685 17:29:15 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:07.685 [2024-10-13T15:29:16.209Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:07.685 Nvme0n1 : 2.00 18558.00 72.49 0.00 0.00 0.00 0.00 0.00 00:19:07.685 [2024-10-13T15:29:16.209Z] =================================================================================================================== 00:19:07.685 [2024-10-13T15:29:16.209Z] Total : 18558.00 72.49 0.00 0.00 0.00 0.00 0.00 00:19:07.685 00:19:07.685 true 00:19:07.685 17:29:16 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:07.685 17:29:16 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:07.946 17:29:16 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:07.946 17:29:16 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:07.946 17:29:16 -- target/nvmf_lvs_grow.sh@65 -- # wait 3178412 00:19:08.966 [2024-10-13T15:29:17.490Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:08.966 Nvme0n1 : 3.00 18622.33 72.74 0.00 0.00 0.00 0.00 0.00 00:19:08.966 [2024-10-13T15:29:17.490Z] =================================================================================================================== 00:19:08.966 [2024-10-13T15:29:17.490Z] Total : 18622.33 72.74 0.00 0.00 0.00 0.00 0.00 00:19:08.966 00:19:09.580 [2024-10-13T15:29:18.104Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:09.580 Nvme0n1 : 4.00 18655.00 72.87 0.00 0.00 0.00 0.00 0.00 00:19:09.580 [2024-10-13T15:29:18.104Z] =================================================================================================================== 00:19:09.580 [2024-10-13T15:29:18.104Z] Total : 18655.00 72.87 0.00 0.00 0.00 0.00 0.00 00:19:09.580 00:19:10.966 [2024-10-13T15:29:19.490Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:10.966 Nvme0n1 : 5.00 18686.20 72.99 0.00 0.00 0.00 0.00 0.00 00:19:10.966 [2024-10-13T15:29:19.490Z] =================================================================================================================== 00:19:10.966 [2024-10-13T15:29:19.490Z] Total : 18686.20 72.99 0.00 0.00 0.00 0.00 0.00 00:19:10.966 00:19:11.907 [2024-10-13T15:29:20.431Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:11.907 Nvme0n1 : 6.00 18718.00 73.12 0.00 0.00 0.00 0.00 0.00 00:19:11.907 [2024-10-13T15:29:20.431Z] =================================================================================================================== 00:19:11.907 [2024-10-13T15:29:20.431Z] 
Total : 18718.00 73.12 0.00 0.00 0.00 0.00 0.00 00:19:11.907 00:19:12.850 [2024-10-13T15:29:21.374Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:12.850 Nvme0n1 : 7.00 18731.86 73.17 0.00 0.00 0.00 0.00 0.00 00:19:12.850 [2024-10-13T15:29:21.374Z] =================================================================================================================== 00:19:12.850 [2024-10-13T15:29:21.374Z] Total : 18731.86 73.17 0.00 0.00 0.00 0.00 0.00 00:19:12.850 00:19:13.793 [2024-10-13T15:29:22.317Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:13.793 Nvme0n1 : 8.00 18749.88 73.24 0.00 0.00 0.00 0.00 0.00 00:19:13.793 [2024-10-13T15:29:22.317Z] =================================================================================================================== 00:19:13.793 [2024-10-13T15:29:22.317Z] Total : 18749.88 73.24 0.00 0.00 0.00 0.00 0.00 00:19:13.793 00:19:14.733 [2024-10-13T15:29:23.257Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:14.733 Nvme0n1 : 9.00 18764.11 73.30 0.00 0.00 0.00 0.00 0.00 00:19:14.733 [2024-10-13T15:29:23.257Z] =================================================================================================================== 00:19:14.733 [2024-10-13T15:29:23.257Z] Total : 18764.11 73.30 0.00 0.00 0.00 0.00 0.00 00:19:14.733 00:19:15.674 [2024-10-13T15:29:24.198Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:15.674 Nvme0n1 : 10.00 18775.70 73.34 0.00 0.00 0.00 0.00 0.00 00:19:15.674 [2024-10-13T15:29:24.198Z] =================================================================================================================== 00:19:15.674 [2024-10-13T15:29:24.198Z] Total : 18775.70 73.34 0.00 0.00 0.00 0.00 0.00 00:19:15.674 00:19:15.674 00:19:15.674 Latency(us) 00:19:15.674 [2024-10-13T15:29:24.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.674 
[2024-10-13T15:29:24.198Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:15.674 Nvme0n1 : 10.01 18775.97 73.34 0.00 0.00 6813.81 4096.00 15619.41 00:19:15.674 [2024-10-13T15:29:24.198Z] =================================================================================================================== 00:19:15.674 [2024-10-13T15:29:24.198Z] Total : 18775.97 73.34 0.00 0.00 6813.81 4096.00 15619.41 00:19:15.674 0 00:19:15.674 17:29:24 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3178078 00:19:15.674 17:29:24 -- common/autotest_common.sh@926 -- # '[' -z 3178078 ']' 00:19:15.674 17:29:24 -- common/autotest_common.sh@930 -- # kill -0 3178078 00:19:15.674 17:29:24 -- common/autotest_common.sh@931 -- # uname 00:19:15.674 17:29:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:15.674 17:29:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3178078 00:19:15.674 17:29:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:15.674 17:29:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:15.674 17:29:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3178078' 00:19:15.674 killing process with pid 3178078 00:19:15.674 17:29:24 -- common/autotest_common.sh@945 -- # kill 3178078 00:19:15.674 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.674 00:19:15.674 Latency(us) 00:19:15.674 [2024-10-13T15:29:24.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.674 [2024-10-13T15:29:24.198Z] =================================================================================================================== 00:19:15.674 [2024-10-13T15:29:24.198Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:15.674 17:29:24 -- common/autotest_common.sh@950 -- # wait 3178078 00:19:15.934 17:29:24 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:19:16.195 17:29:24 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:16.195 17:29:24 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:16.195 17:29:24 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:16.195 17:29:24 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:19:16.195 17:29:24 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3174579 00:19:16.195 17:29:24 -- target/nvmf_lvs_grow.sh@74 -- # wait 3174579 00:19:16.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3174579 Killed "${NVMF_APP[@]}" "$@" 00:19:16.195 17:29:24 -- target/nvmf_lvs_grow.sh@74 -- # true 00:19:16.195 17:29:24 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:19:16.195 17:29:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:16.195 17:29:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:16.195 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:19:16.455 17:29:24 -- nvmf/common.sh@469 -- # nvmfpid=3180469 00:19:16.455 17:29:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:16.455 17:29:24 -- nvmf/common.sh@470 -- # waitforlisten 3180469 00:19:16.455 17:29:24 -- common/autotest_common.sh@819 -- # '[' -z 3180469 ']' 00:19:16.455 17:29:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.455 17:29:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:16.455 17:29:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:16.455 17:29:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:16.455 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:19:16.455 [2024-10-13 17:29:24.783385] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:16.455 [2024-10-13 17:29:24.783476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.455 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.455 [2024-10-13 17:29:24.858004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.455 [2024-10-13 17:29:24.891408] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:16.455 [2024-10-13 17:29:24.891529] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.455 [2024-10-13 17:29:24.891538] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.455 [2024-10-13 17:29:24.891546] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:16.455 [2024-10-13 17:29:24.891566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.025 17:29:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:17.025 17:29:25 -- common/autotest_common.sh@852 -- # return 0 00:19:17.025 17:29:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:17.025 17:29:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:17.025 17:29:25 -- common/autotest_common.sh@10 -- # set +x 00:19:17.284 17:29:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.285 17:29:25 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:17.285 [2024-10-13 17:29:25.731278] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:17.285 [2024-10-13 17:29:25.731366] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:17.285 [2024-10-13 17:29:25.731395] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:17.285 17:29:25 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:19:17.285 17:29:25 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev b8726e22-497d-4024-a4b2-f0bbb3b92782 00:19:17.285 17:29:25 -- common/autotest_common.sh@887 -- # local bdev_name=b8726e22-497d-4024-a4b2-f0bbb3b92782 00:19:17.285 17:29:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:17.285 17:29:25 -- common/autotest_common.sh@889 -- # local i 00:19:17.285 17:29:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:17.285 17:29:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:17.285 17:29:25 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:17.544 17:29:25 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b8726e22-497d-4024-a4b2-f0bbb3b92782 -t 2000 00:19:17.544 [ 00:19:17.544 { 00:19:17.544 "name": "b8726e22-497d-4024-a4b2-f0bbb3b92782", 00:19:17.544 "aliases": [ 00:19:17.544 "lvs/lvol" 00:19:17.544 ], 00:19:17.544 "product_name": "Logical Volume", 00:19:17.544 "block_size": 4096, 00:19:17.544 "num_blocks": 38912, 00:19:17.544 "uuid": "b8726e22-497d-4024-a4b2-f0bbb3b92782", 00:19:17.544 "assigned_rate_limits": { 00:19:17.544 "rw_ios_per_sec": 0, 00:19:17.544 "rw_mbytes_per_sec": 0, 00:19:17.544 "r_mbytes_per_sec": 0, 00:19:17.544 "w_mbytes_per_sec": 0 00:19:17.544 }, 00:19:17.544 "claimed": false, 00:19:17.544 "zoned": false, 00:19:17.544 "supported_io_types": { 00:19:17.544 "read": true, 00:19:17.544 "write": true, 00:19:17.544 "unmap": true, 00:19:17.544 "write_zeroes": true, 00:19:17.544 "flush": false, 00:19:17.544 "reset": true, 00:19:17.544 "compare": false, 00:19:17.544 "compare_and_write": false, 00:19:17.544 "abort": false, 00:19:17.544 "nvme_admin": false, 00:19:17.544 "nvme_io": false 00:19:17.544 }, 00:19:17.544 "driver_specific": { 00:19:17.544 "lvol": { 00:19:17.544 "lvol_store_uuid": "a391bfc5-612f-42bb-9290-864ecf7d1f96", 00:19:17.544 "base_bdev": "aio_bdev", 00:19:17.544 "thin_provision": false, 00:19:17.544 "snapshot": false, 00:19:17.544 "clone": false, 00:19:17.544 "esnap_clone": false 00:19:17.544 } 00:19:17.544 } 00:19:17.544 } 00:19:17.544 ] 00:19:17.544 17:29:26 -- common/autotest_common.sh@895 -- # return 0 00:19:17.544 17:29:26 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:17.544 17:29:26 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:19:17.805 17:29:26 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:19:17.805 17:29:26 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:17.805 17:29:26 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:19:18.064 17:29:26 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:19:18.065 17:29:26 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:18.065 [2024-10-13 17:29:26.515312] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:18.065 17:29:26 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:18.065 17:29:26 -- common/autotest_common.sh@640 -- # local es=0 00:19:18.065 17:29:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:18.065 17:29:26 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:18.065 17:29:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:18.065 17:29:26 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:18.065 17:29:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:18.065 17:29:26 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:18.065 17:29:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:18.065 17:29:26 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:18.065 17:29:26 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:18.065 
17:29:26 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:18.325 request: 00:19:18.325 { 00:19:18.325 "uuid": "a391bfc5-612f-42bb-9290-864ecf7d1f96", 00:19:18.325 "method": "bdev_lvol_get_lvstores", 00:19:18.325 "req_id": 1 00:19:18.325 } 00:19:18.325 Got JSON-RPC error response 00:19:18.325 response: 00:19:18.325 { 00:19:18.325 "code": -19, 00:19:18.325 "message": "No such device" 00:19:18.325 } 00:19:18.325 17:29:26 -- common/autotest_common.sh@643 -- # es=1 00:19:18.325 17:29:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:18.325 17:29:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:18.325 17:29:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:18.325 17:29:26 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:18.586 aio_bdev 00:19:18.586 17:29:26 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b8726e22-497d-4024-a4b2-f0bbb3b92782 00:19:18.586 17:29:26 -- common/autotest_common.sh@887 -- # local bdev_name=b8726e22-497d-4024-a4b2-f0bbb3b92782 00:19:18.586 17:29:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:18.586 17:29:26 -- common/autotest_common.sh@889 -- # local i 00:19:18.586 17:29:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:18.586 17:29:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:18.586 17:29:26 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:18.586 17:29:27 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b8726e22-497d-4024-a4b2-f0bbb3b92782 -t 2000 00:19:18.847 [ 00:19:18.847 { 00:19:18.847 "name": 
"b8726e22-497d-4024-a4b2-f0bbb3b92782", 00:19:18.847 "aliases": [ 00:19:18.847 "lvs/lvol" 00:19:18.847 ], 00:19:18.847 "product_name": "Logical Volume", 00:19:18.847 "block_size": 4096, 00:19:18.847 "num_blocks": 38912, 00:19:18.847 "uuid": "b8726e22-497d-4024-a4b2-f0bbb3b92782", 00:19:18.847 "assigned_rate_limits": { 00:19:18.847 "rw_ios_per_sec": 0, 00:19:18.847 "rw_mbytes_per_sec": 0, 00:19:18.847 "r_mbytes_per_sec": 0, 00:19:18.847 "w_mbytes_per_sec": 0 00:19:18.847 }, 00:19:18.847 "claimed": false, 00:19:18.847 "zoned": false, 00:19:18.847 "supported_io_types": { 00:19:18.847 "read": true, 00:19:18.847 "write": true, 00:19:18.847 "unmap": true, 00:19:18.847 "write_zeroes": true, 00:19:18.847 "flush": false, 00:19:18.847 "reset": true, 00:19:18.847 "compare": false, 00:19:18.847 "compare_and_write": false, 00:19:18.847 "abort": false, 00:19:18.847 "nvme_admin": false, 00:19:18.847 "nvme_io": false 00:19:18.847 }, 00:19:18.847 "driver_specific": { 00:19:18.847 "lvol": { 00:19:18.847 "lvol_store_uuid": "a391bfc5-612f-42bb-9290-864ecf7d1f96", 00:19:18.847 "base_bdev": "aio_bdev", 00:19:18.847 "thin_provision": false, 00:19:18.847 "snapshot": false, 00:19:18.847 "clone": false, 00:19:18.847 "esnap_clone": false 00:19:18.847 } 00:19:18.847 } 00:19:18.847 } 00:19:18.847 ] 00:19:18.847 17:29:27 -- common/autotest_common.sh@895 -- # return 0 00:19:18.847 17:29:27 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:18.847 17:29:27 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:18.847 17:29:27 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:18.847 17:29:27 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:18.847 17:29:27 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:19.109 
17:29:27 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:19.109 17:29:27 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b8726e22-497d-4024-a4b2-f0bbb3b92782 00:19:19.370 17:29:27 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a391bfc5-612f-42bb-9290-864ecf7d1f96 00:19:19.370 17:29:27 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:19.638 17:29:28 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:19.638 00:19:19.638 real 0m16.991s 00:19:19.638 user 0m44.248s 00:19:19.638 sys 0m2.923s 00:19:19.638 17:29:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.638 17:29:28 -- common/autotest_common.sh@10 -- # set +x 00:19:19.638 ************************************ 00:19:19.638 END TEST lvs_grow_dirty 00:19:19.638 ************************************ 00:19:19.638 17:29:28 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:19.638 17:29:28 -- common/autotest_common.sh@796 -- # type=--id 00:19:19.638 17:29:28 -- common/autotest_common.sh@797 -- # id=0 00:19:19.638 17:29:28 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:19.638 17:29:28 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:19.638 17:29:28 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:19.638 17:29:28 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:19.638 17:29:28 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:19.638 17:29:28 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:19.638 nvmf_trace.0 00:19:19.638 17:29:28 -- common/autotest_common.sh@811 -- # 
return 0 00:19:19.638 17:29:28 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:19.638 17:29:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:19.638 17:29:28 -- nvmf/common.sh@116 -- # sync 00:19:19.638 17:29:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:19.638 17:29:28 -- nvmf/common.sh@119 -- # set +e 00:19:19.638 17:29:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:19.638 17:29:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:19.638 rmmod nvme_tcp 00:19:19.638 rmmod nvme_fabrics 00:19:19.898 rmmod nvme_keyring 00:19:19.898 17:29:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:19.898 17:29:28 -- nvmf/common.sh@123 -- # set -e 00:19:19.898 17:29:28 -- nvmf/common.sh@124 -- # return 0 00:19:19.898 17:29:28 -- nvmf/common.sh@477 -- # '[' -n 3180469 ']' 00:19:19.898 17:29:28 -- nvmf/common.sh@478 -- # killprocess 3180469 00:19:19.898 17:29:28 -- common/autotest_common.sh@926 -- # '[' -z 3180469 ']' 00:19:19.898 17:29:28 -- common/autotest_common.sh@930 -- # kill -0 3180469 00:19:19.898 17:29:28 -- common/autotest_common.sh@931 -- # uname 00:19:19.898 17:29:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:19.898 17:29:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3180469 00:19:19.898 17:29:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:19.898 17:29:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:19.898 17:29:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3180469' 00:19:19.898 killing process with pid 3180469 00:19:19.898 17:29:28 -- common/autotest_common.sh@945 -- # kill 3180469 00:19:19.898 17:29:28 -- common/autotest_common.sh@950 -- # wait 3180469 00:19:19.898 17:29:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:19.898 17:29:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:19.898 17:29:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:19.898 17:29:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.898 17:29:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:19.898 17:29:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.898 17:29:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.898 17:29:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.444 17:29:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:22.444 00:19:22.444 real 0m43.301s 00:19:22.444 user 1m5.284s 00:19:22.444 sys 0m10.090s 00:19:22.444 17:29:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.444 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:19:22.444 ************************************ 00:19:22.444 END TEST nvmf_lvs_grow 00:19:22.444 ************************************ 00:19:22.444 17:29:30 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:22.444 17:29:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:22.444 17:29:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:22.444 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:19:22.444 ************************************ 00:19:22.444 START TEST nvmf_bdev_io_wait 00:19:22.444 ************************************ 00:19:22.444 17:29:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:22.444 * Looking for test storage... 
00:19:22.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.444 17:29:30 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.444 17:29:30 -- nvmf/common.sh@7 -- # uname -s 00:19:22.444 17:29:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.444 17:29:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.444 17:29:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.444 17:29:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.444 17:29:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.444 17:29:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.444 17:29:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.444 17:29:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.444 17:29:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.444 17:29:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.444 17:29:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:22.444 17:29:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:22.444 17:29:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.444 17:29:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.444 17:29:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.444 17:29:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.444 17:29:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.444 17:29:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.444 17:29:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.444 17:29:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.445 17:29:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.445 17:29:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.445 17:29:30 -- paths/export.sh@5 -- # export PATH 00:19:22.445 17:29:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.445 17:29:30 -- nvmf/common.sh@46 -- # : 0 00:19:22.445 17:29:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:22.445 17:29:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:22.445 17:29:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:22.445 17:29:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.445 17:29:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.445 17:29:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:22.445 17:29:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:22.445 17:29:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:22.445 17:29:30 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:22.445 17:29:30 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.445 17:29:30 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:22.445 17:29:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:22.445 17:29:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.445 17:29:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:22.445 17:29:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:22.445 17:29:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:22.445 17:29:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.445 17:29:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.445 17:29:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.445 
17:29:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:22.445 17:29:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:22.445 17:29:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:22.445 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:19:29.036 17:29:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:29.036 17:29:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:29.036 17:29:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:29.036 17:29:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:29.036 17:29:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:29.036 17:29:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:29.036 17:29:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:29.036 17:29:37 -- nvmf/common.sh@294 -- # net_devs=() 00:19:29.036 17:29:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:29.036 17:29:37 -- nvmf/common.sh@295 -- # e810=() 00:19:29.036 17:29:37 -- nvmf/common.sh@295 -- # local -ga e810 00:19:29.036 17:29:37 -- nvmf/common.sh@296 -- # x722=() 00:19:29.036 17:29:37 -- nvmf/common.sh@296 -- # local -ga x722 00:19:29.036 17:29:37 -- nvmf/common.sh@297 -- # mlx=() 00:19:29.036 17:29:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:29.036 17:29:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.036 17:29:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.036 17:29:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.036 17:29:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.036 17:29:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.036 17:29:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.036 17:29:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.036 17:29:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.036 17:29:37 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.036 17:29:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.036 17:29:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.036 17:29:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:29.036 17:29:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:29.036 17:29:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:29.037 17:29:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:29.037 17:29:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:29.037 17:29:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:29.037 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:29.037 17:29:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:29.037 17:29:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:29.037 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:29.037 17:29:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:29.037 17:29:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:29.037 17:29:37 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:29.037 17:29:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.037 17:29:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:29.037 17:29:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.037 17:29:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:29.037 Found net devices under 0000:31:00.0: cvl_0_0 00:19:29.037 17:29:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.037 17:29:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:29.037 17:29:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.037 17:29:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:29.037 17:29:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.037 17:29:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:29.037 Found net devices under 0000:31:00.1: cvl_0_1 00:19:29.037 17:29:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.037 17:29:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:29.037 17:29:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:29.037 17:29:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:29.037 17:29:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:29.037 17:29:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.037 17:29:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.037 17:29:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.037 17:29:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:29.037 17:29:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.037 17:29:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.037 17:29:37 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:29.037 17:29:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.037 17:29:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.037 17:29:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:29.298 17:29:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:29.298 17:29:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.298 17:29:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.298 17:29:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.298 17:29:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.298 17:29:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:29.298 17:29:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.560 17:29:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.560 17:29:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.560 17:29:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:29.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:19:29.561 00:19:29.561 --- 10.0.0.2 ping statistics --- 00:19:29.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.561 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:19:29.561 17:29:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:19:29.561 00:19:29.561 --- 10.0.0.1 ping statistics --- 00:19:29.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.561 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:19:29.561 17:29:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.561 17:29:37 -- nvmf/common.sh@410 -- # return 0 00:19:29.561 17:29:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:29.561 17:29:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.561 17:29:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:29.561 17:29:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:29.561 17:29:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.561 17:29:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:29.561 17:29:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:29.561 17:29:37 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:29.561 17:29:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:29.561 17:29:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:29.561 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:19:29.561 17:29:37 -- nvmf/common.sh@469 -- # nvmfpid=3185517 00:19:29.561 17:29:37 -- nvmf/common.sh@470 -- # waitforlisten 3185517 00:19:29.561 17:29:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:29.561 17:29:37 -- common/autotest_common.sh@819 -- # '[' -z 3185517 ']' 00:19:29.561 17:29:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.561 17:29:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:29.561 17:29:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:29.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.562 17:29:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:29.562 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:19:29.562 [2024-10-13 17:29:37.980254] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:29.562 [2024-10-13 17:29:37.980322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.562 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.562 [2024-10-13 17:29:38.056510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.823 [2024-10-13 17:29:38.095705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:29.823 [2024-10-13 17:29:38.095859] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.823 [2024-10-13 17:29:38.095869] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.823 [2024-10-13 17:29:38.095877] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:29.823 [2024-10-13 17:29:38.096034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.823 [2024-10-13 17:29:38.096185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.823 [2024-10-13 17:29:38.096500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.823 [2024-10-13 17:29:38.096502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.394 17:29:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:30.394 17:29:38 -- common/autotest_common.sh@852 -- # return 0 00:19:30.394 17:29:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:30.394 17:29:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:30.394 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.394 17:29:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.394 17:29:38 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:30.394 17:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.394 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.394 17:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.394 17:29:38 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:30.394 17:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.394 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.394 17:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.394 17:29:38 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:30.394 17:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.394 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.394 [2024-10-13 17:29:38.876724] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.394 17:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.394 17:29:38 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:30.394 17:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.394 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.394 Malloc0 00:19:30.394 17:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.394 17:29:38 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:30.394 17:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.394 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.655 17:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.655 17:29:38 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:30.655 17:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.655 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.655 17:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.655 17:29:38 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:30.655 17:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.655 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.655 [2024-10-13 17:29:38.946375] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.655 17:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.655 17:29:38 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3185644 00:19:30.655 17:29:38 -- target/bdev_io_wait.sh@30 -- # READ_PID=3185646 00:19:30.655 17:29:38 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:30.655 17:29:38 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:30.655 17:29:38 -- nvmf/common.sh@520 -- # config=() 00:19:30.655 17:29:38 -- nvmf/common.sh@520 -- # local 
subsystem config 00:19:30.655 17:29:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:30.655 17:29:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:30.655 { 00:19:30.655 "params": { 00:19:30.655 "name": "Nvme$subsystem", 00:19:30.655 "trtype": "$TEST_TRANSPORT", 00:19:30.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.655 "adrfam": "ipv4", 00:19:30.655 "trsvcid": "$NVMF_PORT", 00:19:30.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.655 "hdgst": ${hdgst:-false}, 00:19:30.655 "ddgst": ${ddgst:-false} 00:19:30.655 }, 00:19:30.655 "method": "bdev_nvme_attach_controller" 00:19:30.655 } 00:19:30.655 EOF 00:19:30.655 )") 00:19:30.655 17:29:38 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3185648 00:19:30.655 17:29:38 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:30.655 17:29:38 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:30.655 17:29:38 -- nvmf/common.sh@520 -- # config=() 00:19:30.655 17:29:38 -- nvmf/common.sh@520 -- # local subsystem config 00:19:30.655 17:29:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:30.655 17:29:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:30.655 { 00:19:30.655 "params": { 00:19:30.655 "name": "Nvme$subsystem", 00:19:30.655 "trtype": "$TEST_TRANSPORT", 00:19:30.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.655 "adrfam": "ipv4", 00:19:30.655 "trsvcid": "$NVMF_PORT", 00:19:30.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.655 "hdgst": ${hdgst:-false}, 00:19:30.655 "ddgst": ${ddgst:-false} 00:19:30.655 }, 00:19:30.655 "method": "bdev_nvme_attach_controller" 00:19:30.655 } 00:19:30.655 EOF 00:19:30.655 )") 00:19:30.655 17:29:38 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3185651 00:19:30.655 
17:29:38 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:30.656 17:29:38 -- target/bdev_io_wait.sh@35 -- # sync 00:19:30.656 17:29:38 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:30.656 17:29:38 -- nvmf/common.sh@542 -- # cat 00:19:30.656 17:29:38 -- nvmf/common.sh@520 -- # config=() 00:19:30.656 17:29:38 -- nvmf/common.sh@520 -- # local subsystem config 00:19:30.656 17:29:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:30.656 17:29:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:30.656 { 00:19:30.656 "params": { 00:19:30.656 "name": "Nvme$subsystem", 00:19:30.656 "trtype": "$TEST_TRANSPORT", 00:19:30.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.656 "adrfam": "ipv4", 00:19:30.656 "trsvcid": "$NVMF_PORT", 00:19:30.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.656 "hdgst": ${hdgst:-false}, 00:19:30.656 "ddgst": ${ddgst:-false} 00:19:30.656 }, 00:19:30.656 "method": "bdev_nvme_attach_controller" 00:19:30.656 } 00:19:30.656 EOF 00:19:30.656 )") 00:19:30.656 17:29:38 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:30.656 17:29:38 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:30.656 17:29:38 -- nvmf/common.sh@520 -- # config=() 00:19:30.656 17:29:38 -- nvmf/common.sh@542 -- # cat 00:19:30.656 17:29:38 -- nvmf/common.sh@520 -- # local subsystem config 00:19:30.656 17:29:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:30.656 17:29:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:30.656 { 00:19:30.656 "params": { 00:19:30.656 "name": "Nvme$subsystem", 00:19:30.656 "trtype": "$TEST_TRANSPORT", 00:19:30.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.656 
"adrfam": "ipv4", 00:19:30.656 "trsvcid": "$NVMF_PORT", 00:19:30.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.656 "hdgst": ${hdgst:-false}, 00:19:30.656 "ddgst": ${ddgst:-false} 00:19:30.656 }, 00:19:30.656 "method": "bdev_nvme_attach_controller" 00:19:30.656 } 00:19:30.656 EOF 00:19:30.656 )") 00:19:30.656 17:29:38 -- nvmf/common.sh@542 -- # cat 00:19:30.656 17:29:38 -- target/bdev_io_wait.sh@37 -- # wait 3185644 00:19:30.656 17:29:38 -- nvmf/common.sh@542 -- # cat 00:19:30.656 17:29:38 -- nvmf/common.sh@544 -- # jq . 00:19:30.656 17:29:38 -- nvmf/common.sh@544 -- # jq . 00:19:30.656 17:29:38 -- nvmf/common.sh@545 -- # IFS=, 00:19:30.656 17:29:38 -- nvmf/common.sh@544 -- # jq . 00:19:30.656 17:29:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:30.656 "params": { 00:19:30.656 "name": "Nvme1", 00:19:30.656 "trtype": "tcp", 00:19:30.656 "traddr": "10.0.0.2", 00:19:30.656 "adrfam": "ipv4", 00:19:30.656 "trsvcid": "4420", 00:19:30.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.656 "hdgst": false, 00:19:30.656 "ddgst": false 00:19:30.656 }, 00:19:30.656 "method": "bdev_nvme_attach_controller" 00:19:30.656 }' 00:19:30.656 17:29:38 -- nvmf/common.sh@544 -- # jq . 
00:19:30.656 17:29:38 -- nvmf/common.sh@545 -- # IFS=, 00:19:30.656 17:29:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:30.656 "params": { 00:19:30.656 "name": "Nvme1", 00:19:30.656 "trtype": "tcp", 00:19:30.656 "traddr": "10.0.0.2", 00:19:30.656 "adrfam": "ipv4", 00:19:30.656 "trsvcid": "4420", 00:19:30.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.656 "hdgst": false, 00:19:30.656 "ddgst": false 00:19:30.656 }, 00:19:30.656 "method": "bdev_nvme_attach_controller" 00:19:30.656 }' 00:19:30.656 17:29:38 -- nvmf/common.sh@545 -- # IFS=, 00:19:30.656 17:29:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:30.656 "params": { 00:19:30.656 "name": "Nvme1", 00:19:30.656 "trtype": "tcp", 00:19:30.656 "traddr": "10.0.0.2", 00:19:30.656 "adrfam": "ipv4", 00:19:30.656 "trsvcid": "4420", 00:19:30.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.656 "hdgst": false, 00:19:30.656 "ddgst": false 00:19:30.656 }, 00:19:30.656 "method": "bdev_nvme_attach_controller" 00:19:30.656 }' 00:19:30.656 17:29:38 -- nvmf/common.sh@545 -- # IFS=, 00:19:30.656 17:29:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:30.656 "params": { 00:19:30.656 "name": "Nvme1", 00:19:30.656 "trtype": "tcp", 00:19:30.656 "traddr": "10.0.0.2", 00:19:30.656 "adrfam": "ipv4", 00:19:30.656 "trsvcid": "4420", 00:19:30.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.656 "hdgst": false, 00:19:30.656 "ddgst": false 00:19:30.656 }, 00:19:30.656 "method": "bdev_nvme_attach_controller" 00:19:30.656 }' 00:19:30.656 [2024-10-13 17:29:38.998642] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:30.656 [2024-10-13 17:29:38.998693] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:30.656 [2024-10-13 17:29:38.998985] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:30.656 [2024-10-13 17:29:38.999030] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:30.656 [2024-10-13 17:29:38.999895] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:30.656 [2024-10-13 17:29:38.999944] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:30.656 [2024-10-13 17:29:39.000861] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:30.656 [2024-10-13 17:29:39.000907] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:30.656 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.656 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.656 [2024-10-13 17:29:39.146736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.656 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.656 [2024-10-13 17:29:39.162718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:30.917 [2024-10-13 17:29:39.202871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.917 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.917 [2024-10-13 17:29:39.218779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:30.917 [2024-10-13 17:29:39.263804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.917 [2024-10-13 17:29:39.281692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:30.917 [2024-10-13 17:29:39.293057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.917 [2024-10-13 17:29:39.309197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:30.917 Running I/O for 1 seconds... 00:19:30.917 Running I/O for 1 seconds... 00:19:31.178 Running I/O for 1 seconds... 00:19:31.178 Running I/O for 1 seconds... 
00:19:32.121 00:19:32.121 Latency(us) 00:19:32.121 [2024-10-13T15:29:40.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.121 [2024-10-13T15:29:40.645Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:32.121 Nvme1n1 : 1.00 190511.61 744.19 0.00 0.00 669.77 266.24 1146.88 00:19:32.121 [2024-10-13T15:29:40.645Z] =================================================================================================================== 00:19:32.121 [2024-10-13T15:29:40.645Z] Total : 190511.61 744.19 0.00 0.00 669.77 266.24 1146.88 00:19:32.121 00:19:32.121 Latency(us) 00:19:32.121 [2024-10-13T15:29:40.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.121 [2024-10-13T15:29:40.645Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:32.121 Nvme1n1 : 1.01 9322.78 36.42 0.00 0.00 13636.95 5543.25 24466.77 00:19:32.121 [2024-10-13T15:29:40.645Z] =================================================================================================================== 00:19:32.121 [2024-10-13T15:29:40.645Z] Total : 9322.78 36.42 0.00 0.00 13636.95 5543.25 24466.77 00:19:32.121 00:19:32.121 Latency(us) 00:19:32.121 [2024-10-13T15:29:40.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.121 [2024-10-13T15:29:40.645Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:32.121 Nvme1n1 : 1.00 18900.57 73.83 0.00 0.00 6755.94 3822.93 18240.85 00:19:32.121 [2024-10-13T15:29:40.645Z] =================================================================================================================== 00:19:32.121 [2024-10-13T15:29:40.645Z] Total : 18900.57 73.83 0.00 0.00 6755.94 3822.93 18240.85 00:19:32.121 00:19:32.121 Latency(us) 00:19:32.121 [2024-10-13T15:29:40.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.121 [2024-10-13T15:29:40.645Z] Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:19:32.121 Nvme1n1 : 1.00 9018.97 35.23 0.00 0.00 14151.51 4751.36 37137.07 00:19:32.121 [2024-10-13T15:29:40.645Z] =================================================================================================================== 00:19:32.121 [2024-10-13T15:29:40.645Z] Total : 9018.97 35.23 0.00 0.00 14151.51 4751.36 37137.07 00:19:32.383 17:29:40 -- target/bdev_io_wait.sh@38 -- # wait 3185646 00:19:32.383 17:29:40 -- target/bdev_io_wait.sh@39 -- # wait 3185648 00:19:32.383 17:29:40 -- target/bdev_io_wait.sh@40 -- # wait 3185651 00:19:32.383 17:29:40 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.383 17:29:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.383 17:29:40 -- common/autotest_common.sh@10 -- # set +x 00:19:32.383 17:29:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.383 17:29:40 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:32.383 17:29:40 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:32.383 17:29:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:32.383 17:29:40 -- nvmf/common.sh@116 -- # sync 00:19:32.383 17:29:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:32.383 17:29:40 -- nvmf/common.sh@119 -- # set +e 00:19:32.383 17:29:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:32.383 17:29:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:32.383 rmmod nvme_tcp 00:19:32.383 rmmod nvme_fabrics 00:19:32.383 rmmod nvme_keyring 00:19:32.383 17:29:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:32.383 17:29:40 -- nvmf/common.sh@123 -- # set -e 00:19:32.383 17:29:40 -- nvmf/common.sh@124 -- # return 0 00:19:32.383 17:29:40 -- nvmf/common.sh@477 -- # '[' -n 3185517 ']' 00:19:32.383 17:29:40 -- nvmf/common.sh@478 -- # killprocess 3185517 00:19:32.383 17:29:40 -- common/autotest_common.sh@926 -- # '[' -z 3185517 ']' 00:19:32.383 17:29:40 -- common/autotest_common.sh@930 
-- # kill -0 3185517 00:19:32.383 17:29:40 -- common/autotest_common.sh@931 -- # uname 00:19:32.383 17:29:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:32.383 17:29:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3185517 00:19:32.383 17:29:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:32.383 17:29:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:32.383 17:29:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3185517' 00:19:32.383 killing process with pid 3185517 00:19:32.383 17:29:40 -- common/autotest_common.sh@945 -- # kill 3185517 00:19:32.383 17:29:40 -- common/autotest_common.sh@950 -- # wait 3185517 00:19:32.644 17:29:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:32.644 17:29:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:32.644 17:29:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:32.644 17:29:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.644 17:29:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:32.644 17:29:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.644 17:29:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.644 17:29:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.556 17:29:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:34.556 00:19:34.556 real 0m12.548s 00:19:34.556 user 0m18.653s 00:19:34.556 sys 0m6.820s 00:19:34.556 17:29:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.556 17:29:43 -- common/autotest_common.sh@10 -- # set +x 00:19:34.556 ************************************ 00:19:34.556 END TEST nvmf_bdev_io_wait 00:19:34.556 ************************************ 00:19:34.817 17:29:43 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:34.817 17:29:43 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:34.817 17:29:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:34.817 17:29:43 -- common/autotest_common.sh@10 -- # set +x 00:19:34.817 ************************************ 00:19:34.817 START TEST nvmf_queue_depth 00:19:34.817 ************************************ 00:19:34.817 17:29:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:34.817 * Looking for test storage... 00:19:34.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.817 17:29:43 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.817 17:29:43 -- nvmf/common.sh@7 -- # uname -s 00:19:34.817 17:29:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.817 17:29:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.817 17:29:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.817 17:29:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.817 17:29:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.817 17:29:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.817 17:29:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.817 17:29:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.817 17:29:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.817 17:29:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.817 17:29:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.817 17:29:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.817 17:29:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.817 17:29:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.817 17:29:43 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:19:34.817 17:29:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.817 17:29:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.817 17:29:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.817 17:29:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.817 17:29:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.817 17:29:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.817 17:29:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.817 17:29:43 -- paths/export.sh@5 -- # export PATH 00:19:34.817 17:29:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.817 17:29:43 -- nvmf/common.sh@46 -- # : 0 00:19:34.817 17:29:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:34.817 17:29:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:34.817 17:29:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:34.817 17:29:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.817 17:29:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.817 17:29:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:34.817 17:29:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:34.817 17:29:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:34.817 17:29:43 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:34.817 17:29:43 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:34.817 17:29:43 -- 
target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.817 17:29:43 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:34.817 17:29:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:34.817 17:29:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.817 17:29:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:34.817 17:29:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:34.817 17:29:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:34.817 17:29:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.817 17:29:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.817 17:29:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.817 17:29:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:34.817 17:29:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:34.817 17:29:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:34.817 17:29:43 -- common/autotest_common.sh@10 -- # set +x 00:19:42.956 17:29:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:42.956 17:29:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:42.956 17:29:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:42.956 17:29:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:42.956 17:29:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:42.956 17:29:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:42.956 17:29:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:42.956 17:29:50 -- nvmf/common.sh@294 -- # net_devs=() 00:19:42.956 17:29:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:42.956 17:29:50 -- nvmf/common.sh@295 -- # e810=() 00:19:42.956 17:29:50 -- nvmf/common.sh@295 -- # local -ga e810 00:19:42.956 17:29:50 -- nvmf/common.sh@296 -- # x722=() 00:19:42.956 17:29:50 -- nvmf/common.sh@296 -- # local -ga x722 00:19:42.956 17:29:50 -- nvmf/common.sh@297 -- # mlx=() 00:19:42.956 17:29:50 -- nvmf/common.sh@297 -- # local -ga mlx 
00:19:42.956 17:29:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.956 17:29:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.956 17:29:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.956 17:29:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.956 17:29:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.956 17:29:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.956 17:29:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.956 17:29:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.956 17:29:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.956 17:29:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.956 17:29:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.957 17:29:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:42.957 17:29:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:42.957 17:29:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:42.957 17:29:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:42.957 17:29:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:42.957 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:42.957 17:29:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@351 -- # [[ 
tcp == rdma ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:42.957 17:29:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:42.957 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:42.957 17:29:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:42.957 17:29:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:42.957 17:29:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.957 17:29:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:42.957 17:29:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.957 17:29:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:42.957 Found net devices under 0000:31:00.0: cvl_0_0 00:19:42.957 17:29:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.957 17:29:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:42.957 17:29:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.957 17:29:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:42.957 17:29:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.957 17:29:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:42.957 Found net devices under 0000:31:00.1: cvl_0_1 00:19:42.957 17:29:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.957 17:29:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 
00:19:42.957 17:29:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:42.957 17:29:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:42.957 17:29:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.957 17:29:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.957 17:29:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.957 17:29:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:42.957 17:29:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.957 17:29:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.957 17:29:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:42.957 17:29:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.957 17:29:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.957 17:29:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:42.957 17:29:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:42.957 17:29:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.957 17:29:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.957 17:29:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.957 17:29:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.957 17:29:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:42.957 17:29:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.957 17:29:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.957 17:29:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.957 17:29:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:42.957 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:19:42.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:19:42.957 00:19:42.957 --- 10.0.0.2 ping statistics --- 00:19:42.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.957 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:19:42.957 17:29:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:42.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:19:42.957 00:19:42.957 --- 10.0.0.1 ping statistics --- 00:19:42.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.957 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:19:42.957 17:29:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.957 17:29:50 -- nvmf/common.sh@410 -- # return 0 00:19:42.957 17:29:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:42.957 17:29:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.957 17:29:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:42.957 17:29:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.957 17:29:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:42.957 17:29:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:42.957 17:29:50 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:42.957 17:29:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:42.957 17:29:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:42.957 17:29:50 -- common/autotest_common.sh@10 -- # set +x 00:19:42.957 17:29:50 -- nvmf/common.sh@469 -- # nvmfpid=3190403 00:19:42.957 17:29:50 -- nvmf/common.sh@470 -- # waitforlisten 3190403 00:19:42.957 17:29:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.957 17:29:50 -- 
common/autotest_common.sh@819 -- # '[' -z 3190403 ']' 00:19:42.957 17:29:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.957 17:29:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:42.957 17:29:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.957 17:29:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:42.957 17:29:50 -- common/autotest_common.sh@10 -- # set +x 00:19:42.957 [2024-10-13 17:29:50.738036] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:42.957 [2024-10-13 17:29:50.738091] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.957 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.957 [2024-10-13 17:29:50.822466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.957 [2024-10-13 17:29:50.853909] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:42.957 [2024-10-13 17:29:50.854036] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.957 [2024-10-13 17:29:50.854045] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.957 [2024-10-13 17:29:50.854053] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:42.957 [2024-10-13 17:29:50.854080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.218 17:29:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:43.218 17:29:51 -- common/autotest_common.sh@852 -- # return 0 00:19:43.218 17:29:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:43.218 17:29:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:43.218 17:29:51 -- common/autotest_common.sh@10 -- # set +x 00:19:43.218 17:29:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.218 17:29:51 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:43.218 17:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.218 17:29:51 -- common/autotest_common.sh@10 -- # set +x 00:19:43.218 [2024-10-13 17:29:51.580176] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.218 17:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.218 17:29:51 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:43.218 17:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.218 17:29:51 -- common/autotest_common.sh@10 -- # set +x 00:19:43.218 Malloc0 00:19:43.218 17:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.218 17:29:51 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:43.218 17:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.218 17:29:51 -- common/autotest_common.sh@10 -- # set +x 00:19:43.218 17:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.218 17:29:51 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:43.218 17:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.218 17:29:51 -- common/autotest_common.sh@10 -- # set +x 00:19:43.218 17:29:51 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.218 17:29:51 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.218 17:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.218 17:29:51 -- common/autotest_common.sh@10 -- # set +x 00:19:43.218 [2024-10-13 17:29:51.646630] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.218 17:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.218 17:29:51 -- target/queue_depth.sh@30 -- # bdevperf_pid=3190461 00:19:43.218 17:29:51 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:43.218 17:29:51 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:43.218 17:29:51 -- target/queue_depth.sh@33 -- # waitforlisten 3190461 /var/tmp/bdevperf.sock 00:19:43.218 17:29:51 -- common/autotest_common.sh@819 -- # '[' -z 3190461 ']' 00:19:43.218 17:29:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.218 17:29:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:43.218 17:29:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.218 17:29:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:43.218 17:29:51 -- common/autotest_common.sh@10 -- # set +x 00:19:43.218 [2024-10-13 17:29:51.699603] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:43.218 [2024-10-13 17:29:51.699670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190461 ] 00:19:43.218 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.479 [2024-10-13 17:29:51.768712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.479 [2024-10-13 17:29:51.805049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.050 17:29:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:44.050 17:29:52 -- common/autotest_common.sh@852 -- # return 0 00:19:44.050 17:29:52 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:44.050 17:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.050 17:29:52 -- common/autotest_common.sh@10 -- # set +x 00:19:44.310 NVMe0n1 00:19:44.310 17:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.310 17:29:52 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.310 Running I/O for 10 seconds... 
00:19:56.541 00:19:56.541 Latency(us) 00:19:56.541 [2024-10-13T15:30:05.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.541 [2024-10-13T15:30:05.065Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:56.541 Verification LBA range: start 0x0 length 0x4000 00:19:56.541 NVMe0n1 : 10.08 18907.79 73.86 0.00 0.00 53786.39 14308.69 54394.88 00:19:56.541 [2024-10-13T15:30:05.065Z] =================================================================================================================== 00:19:56.541 [2024-10-13T15:30:05.065Z] Total : 18907.79 73.86 0.00 0.00 53786.39 14308.69 54394.88 00:19:56.541 0 00:19:56.541 17:30:02 -- target/queue_depth.sh@39 -- # killprocess 3190461 00:19:56.541 17:30:02 -- common/autotest_common.sh@926 -- # '[' -z 3190461 ']' 00:19:56.541 17:30:02 -- common/autotest_common.sh@930 -- # kill -0 3190461 00:19:56.541 17:30:02 -- common/autotest_common.sh@931 -- # uname 00:19:56.541 17:30:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:56.541 17:30:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3190461 00:19:56.541 17:30:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:56.541 17:30:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:56.541 17:30:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3190461' 00:19:56.541 killing process with pid 3190461 00:19:56.541 17:30:02 -- common/autotest_common.sh@945 -- # kill 3190461 00:19:56.541 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.541 00:19:56.541 Latency(us) 00:19:56.541 [2024-10-13T15:30:05.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.541 [2024-10-13T15:30:05.065Z] =================================================================================================================== 00:19:56.541 [2024-10-13T15:30:05.065Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:19:56.541 17:30:02 -- common/autotest_common.sh@950 -- # wait 3190461 00:19:56.541 17:30:03 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:56.541 17:30:03 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:56.541 17:30:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:56.541 17:30:03 -- nvmf/common.sh@116 -- # sync 00:19:56.541 17:30:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:56.541 17:30:03 -- nvmf/common.sh@119 -- # set +e 00:19:56.541 17:30:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:56.541 17:30:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:56.541 rmmod nvme_tcp 00:19:56.541 rmmod nvme_fabrics 00:19:56.541 rmmod nvme_keyring 00:19:56.541 17:30:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:56.541 17:30:03 -- nvmf/common.sh@123 -- # set -e 00:19:56.541 17:30:03 -- nvmf/common.sh@124 -- # return 0 00:19:56.541 17:30:03 -- nvmf/common.sh@477 -- # '[' -n 3190403 ']' 00:19:56.541 17:30:03 -- nvmf/common.sh@478 -- # killprocess 3190403 00:19:56.541 17:30:03 -- common/autotest_common.sh@926 -- # '[' -z 3190403 ']' 00:19:56.541 17:30:03 -- common/autotest_common.sh@930 -- # kill -0 3190403 00:19:56.541 17:30:03 -- common/autotest_common.sh@931 -- # uname 00:19:56.541 17:30:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:56.541 17:30:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3190403 00:19:56.541 17:30:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:56.541 17:30:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:56.541 17:30:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3190403' 00:19:56.541 killing process with pid 3190403 00:19:56.541 17:30:03 -- common/autotest_common.sh@945 -- # kill 3190403 00:19:56.541 17:30:03 -- common/autotest_common.sh@950 -- # wait 3190403 00:19:56.541 17:30:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:56.541 17:30:03 -- nvmf/common.sh@483 -- # [[ tcp 
== \t\c\p ]] 00:19:56.541 17:30:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:56.541 17:30:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.541 17:30:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:56.541 17:30:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.541 17:30:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.541 17:30:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.114 17:30:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:57.114 00:19:57.114 real 0m22.328s 00:19:57.114 user 0m25.788s 00:19:57.114 sys 0m6.781s 00:19:57.114 17:30:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.114 17:30:05 -- common/autotest_common.sh@10 -- # set +x 00:19:57.114 ************************************ 00:19:57.114 END TEST nvmf_queue_depth 00:19:57.114 ************************************ 00:19:57.114 17:30:05 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:57.114 17:30:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:57.114 17:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:57.114 17:30:05 -- common/autotest_common.sh@10 -- # set +x 00:19:57.114 ************************************ 00:19:57.114 START TEST nvmf_multipath 00:19:57.114 ************************************ 00:19:57.114 17:30:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:57.114 * Looking for test storage... 
00:19:57.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.114 17:30:05 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.114 17:30:05 -- nvmf/common.sh@7 -- # uname -s 00:19:57.114 17:30:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.114 17:30:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.114 17:30:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.114 17:30:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.114 17:30:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.114 17:30:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.114 17:30:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.114 17:30:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.114 17:30:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.114 17:30:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.114 17:30:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:57.114 17:30:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:57.114 17:30:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.114 17:30:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.114 17:30:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.114 17:30:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.114 17:30:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.114 17:30:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.114 17:30:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.114 17:30:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.114 17:30:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.114 17:30:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.114 17:30:05 -- paths/export.sh@5 -- # export PATH 00:19:57.114 17:30:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.114 17:30:05 -- nvmf/common.sh@46 -- # : 0 00:19:57.114 17:30:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:57.114 17:30:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:57.114 17:30:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:57.114 17:30:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.114 17:30:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.114 17:30:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:57.114 17:30:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:57.114 17:30:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:57.114 17:30:05 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:57.114 17:30:05 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:57.114 17:30:05 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:57.114 17:30:05 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:57.114 17:30:05 -- target/multipath.sh@43 -- # nvmftestinit 00:19:57.114 17:30:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:57.114 17:30:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.114 17:30:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:57.114 17:30:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:57.114 17:30:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:57.114 17:30:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:57.114 17:30:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.114 17:30:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.114 17:30:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:57.114 17:30:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:57.114 17:30:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:57.114 17:30:05 -- common/autotest_common.sh@10 -- # set +x 00:20:05.252 17:30:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:05.252 17:30:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:05.252 17:30:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:05.252 17:30:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:05.252 17:30:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:05.252 17:30:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:05.252 17:30:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:05.252 17:30:12 -- nvmf/common.sh@294 -- # net_devs=() 00:20:05.252 17:30:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:05.252 17:30:12 -- nvmf/common.sh@295 -- # e810=() 00:20:05.252 17:30:12 -- nvmf/common.sh@295 -- # local -ga e810 00:20:05.252 17:30:12 -- nvmf/common.sh@296 -- # x722=() 00:20:05.252 17:30:12 -- nvmf/common.sh@296 -- # local -ga x722 00:20:05.252 17:30:12 -- nvmf/common.sh@297 -- # mlx=() 00:20:05.252 17:30:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:05.252 17:30:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.252 17:30:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.252 17:30:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.252 17:30:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.252 17:30:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.252 17:30:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:20:05.252 17:30:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.252 17:30:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.252 17:30:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.252 17:30:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.252 17:30:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.252 17:30:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:05.252 17:30:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:05.252 17:30:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:05.252 17:30:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:05.252 17:30:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:05.252 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:05.252 17:30:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:05.252 17:30:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:05.252 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:05.252 17:30:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.252 17:30:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.252 17:30:12 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:05.253 17:30:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:05.253 17:30:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:05.253 17:30:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:05.253 17:30:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:05.253 17:30:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.253 17:30:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:05.253 17:30:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.253 17:30:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:05.253 Found net devices under 0000:31:00.0: cvl_0_0 00:20:05.253 17:30:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.253 17:30:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:05.253 17:30:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.253 17:30:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:05.253 17:30:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.253 17:30:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:05.253 Found net devices under 0000:31:00.1: cvl_0_1 00:20:05.253 17:30:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.253 17:30:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:05.253 17:30:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:05.253 17:30:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:05.253 17:30:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:05.253 17:30:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:05.253 17:30:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.253 17:30:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.253 17:30:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.253 17:30:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:20:05.253 17:30:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.253 17:30:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.253 17:30:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:05.253 17:30:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.253 17:30:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.253 17:30:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:05.253 17:30:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:05.253 17:30:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.253 17:30:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.253 17:30:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.253 17:30:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.253 17:30:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:05.253 17:30:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.253 17:30:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.253 17:30:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.253 17:30:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:05.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:20:05.253 00:20:05.253 --- 10.0.0.2 ping statistics --- 00:20:05.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.253 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:20:05.253 17:30:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:05.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:20:05.253 00:20:05.253 --- 10.0.0.1 ping statistics --- 00:20:05.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.253 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:20:05.253 17:30:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.253 17:30:13 -- nvmf/common.sh@410 -- # return 0 00:20:05.253 17:30:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:05.253 17:30:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.253 17:30:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:05.253 17:30:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:05.253 17:30:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.253 17:30:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:05.253 17:30:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:05.253 17:30:13 -- target/multipath.sh@45 -- # '[' -z ']' 00:20:05.253 17:30:13 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:20:05.253 only one NIC for nvmf test 00:20:05.253 17:30:13 -- target/multipath.sh@47 -- # nvmftestfini 00:20:05.253 17:30:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:05.253 17:30:13 -- nvmf/common.sh@116 -- # sync 00:20:05.253 17:30:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:05.253 17:30:13 -- nvmf/common.sh@119 -- # set +e 00:20:05.253 17:30:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:05.253 17:30:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:05.253 rmmod nvme_tcp 00:20:05.253 rmmod nvme_fabrics 00:20:05.253 rmmod nvme_keyring 00:20:05.253 17:30:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:05.253 17:30:13 -- nvmf/common.sh@123 -- # set -e 00:20:05.253 17:30:13 -- nvmf/common.sh@124 -- # return 0 00:20:05.253 17:30:13 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:20:05.253 17:30:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:05.253 17:30:13 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:05.253 17:30:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:05.253 17:30:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.253 17:30:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:05.253 17:30:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.253 17:30:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.253 17:30:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.162 17:30:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:07.162 17:30:15 -- target/multipath.sh@48 -- # exit 0 00:20:07.162 17:30:15 -- target/multipath.sh@1 -- # nvmftestfini 00:20:07.162 17:30:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:07.162 17:30:15 -- nvmf/common.sh@116 -- # sync 00:20:07.162 17:30:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:07.162 17:30:15 -- nvmf/common.sh@119 -- # set +e 00:20:07.162 17:30:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:07.162 17:30:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:07.162 17:30:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:07.162 17:30:15 -- nvmf/common.sh@123 -- # set -e 00:20:07.162 17:30:15 -- nvmf/common.sh@124 -- # return 0 00:20:07.162 17:30:15 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:20:07.162 17:30:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:07.162 17:30:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:07.162 17:30:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:07.162 17:30:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.162 17:30:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:07.162 17:30:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.162 17:30:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.162 17:30:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.162 17:30:15 
-- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:07.162 00:20:07.162 real 0m9.776s 00:20:07.162 user 0m2.088s 00:20:07.162 sys 0m5.587s 00:20:07.162 17:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.162 17:30:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.162 ************************************ 00:20:07.163 END TEST nvmf_multipath 00:20:07.163 ************************************ 00:20:07.163 17:30:15 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:07.163 17:30:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:07.163 17:30:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:07.163 17:30:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.163 ************************************ 00:20:07.163 START TEST nvmf_zcopy 00:20:07.163 ************************************ 00:20:07.163 17:30:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:07.163 * Looking for test storage... 
00:20:07.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.163 17:30:15 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.163 17:30:15 -- nvmf/common.sh@7 -- # uname -s 00:20:07.163 17:30:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.163 17:30:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.163 17:30:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.163 17:30:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.163 17:30:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.163 17:30:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.163 17:30:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.163 17:30:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.163 17:30:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.163 17:30:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.163 17:30:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:07.163 17:30:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:07.163 17:30:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.163 17:30:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.163 17:30:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.163 17:30:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.163 17:30:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.163 17:30:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.163 17:30:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.163 17:30:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.163 17:30:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.163 17:30:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.163 17:30:15 -- paths/export.sh@5 -- # export PATH 00:20:07.163 17:30:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.163 17:30:15 -- nvmf/common.sh@46 -- # : 0 00:20:07.163 17:30:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:07.163 17:30:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:07.163 17:30:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:07.163 17:30:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.163 17:30:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.163 17:30:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:07.163 17:30:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:07.163 17:30:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:07.163 17:30:15 -- target/zcopy.sh@12 -- # nvmftestinit 00:20:07.163 17:30:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:07.163 17:30:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.163 17:30:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:07.163 17:30:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:07.163 17:30:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:07.163 17:30:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.163 17:30:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.163 17:30:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.163 17:30:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:07.163 17:30:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:07.163 17:30:15 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:20:07.163 17:30:15 -- common/autotest_common.sh@10 -- # set +x 00:20:15.300 17:30:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:15.300 17:30:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:15.300 17:30:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:15.300 17:30:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:15.300 17:30:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:15.300 17:30:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:15.300 17:30:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:15.300 17:30:22 -- nvmf/common.sh@294 -- # net_devs=() 00:20:15.300 17:30:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:15.300 17:30:22 -- nvmf/common.sh@295 -- # e810=() 00:20:15.300 17:30:22 -- nvmf/common.sh@295 -- # local -ga e810 00:20:15.300 17:30:22 -- nvmf/common.sh@296 -- # x722=() 00:20:15.300 17:30:22 -- nvmf/common.sh@296 -- # local -ga x722 00:20:15.300 17:30:22 -- nvmf/common.sh@297 -- # mlx=() 00:20:15.300 17:30:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:15.300 17:30:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.300 17:30:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:15.300 17:30:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:15.300 17:30:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:15.300 17:30:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:15.300 17:30:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:15.300 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:15.300 17:30:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:15.300 17:30:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:15.300 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:15.300 17:30:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:15.300 17:30:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:20:15.300 17:30:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.300 17:30:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:15.300 17:30:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.300 17:30:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:15.300 Found net devices under 0000:31:00.0: cvl_0_0 00:20:15.300 17:30:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.300 17:30:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:15.300 17:30:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.300 17:30:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:15.300 17:30:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.300 17:30:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:15.300 Found net devices under 0000:31:00.1: cvl_0_1 00:20:15.300 17:30:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.300 17:30:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:15.300 17:30:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:15.300 17:30:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:15.300 17:30:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:15.300 17:30:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.300 17:30:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.300 17:30:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.300 17:30:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:15.300 17:30:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.300 17:30:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.300 17:30:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:15.300 17:30:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:20:15.300 17:30:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.300 17:30:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:15.300 17:30:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:15.300 17:30:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.300 17:30:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.300 17:30:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.300 17:30:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.300 17:30:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:15.300 17:30:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.300 17:30:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.300 17:30:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:15.300 17:30:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:15.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:20:15.300 00:20:15.300 --- 10.0.0.2 ping statistics --- 00:20:15.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.300 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:20:15.300 17:30:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:15.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:15.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:20:15.301 00:20:15.301 --- 10.0.0.1 ping statistics --- 00:20:15.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.301 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:20:15.301 17:30:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.301 17:30:22 -- nvmf/common.sh@410 -- # return 0 00:20:15.301 17:30:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:15.301 17:30:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.301 17:30:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:15.301 17:30:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:15.301 17:30:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.301 17:30:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:15.301 17:30:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:15.301 17:30:22 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:20:15.301 17:30:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:15.301 17:30:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:15.301 17:30:22 -- common/autotest_common.sh@10 -- # set +x 00:20:15.301 17:30:22 -- nvmf/common.sh@469 -- # nvmfpid=3201813 00:20:15.301 17:30:22 -- nvmf/common.sh@470 -- # waitforlisten 3201813 00:20:15.301 17:30:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:15.301 17:30:22 -- common/autotest_common.sh@819 -- # '[' -z 3201813 ']' 00:20:15.301 17:30:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.301 17:30:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:15.301 17:30:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:15.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.301 17:30:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:15.301 17:30:22 -- common/autotest_common.sh@10 -- # set +x 00:20:15.301 [2024-10-13 17:30:22.969597] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:15.301 [2024-10-13 17:30:22.969660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.301 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.301 [2024-10-13 17:30:23.061633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.301 [2024-10-13 17:30:23.105982] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:15.301 [2024-10-13 17:30:23.106147] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.301 [2024-10-13 17:30:23.106158] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.301 [2024-10-13 17:30:23.106173] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:15.301 [2024-10-13 17:30:23.106195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.301 17:30:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:15.301 17:30:23 -- common/autotest_common.sh@852 -- # return 0 00:20:15.301 17:30:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:15.301 17:30:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:15.301 17:30:23 -- common/autotest_common.sh@10 -- # set +x 00:20:15.301 17:30:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.301 17:30:23 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:20:15.301 17:30:23 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:20:15.301 17:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.301 17:30:23 -- common/autotest_common.sh@10 -- # set +x 00:20:15.301 [2024-10-13 17:30:23.806132] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.301 17:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.301 17:30:23 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:15.301 17:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.301 17:30:23 -- common/autotest_common.sh@10 -- # set +x 00:20:15.301 17:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.301 17:30:23 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.301 17:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.301 17:30:23 -- common/autotest_common.sh@10 -- # set +x 00:20:15.561 [2024-10-13 17:30:23.830384] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.561 17:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.561 17:30:23 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:15.562 17:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.562 17:30:23 -- common/autotest_common.sh@10 -- # set +x 00:20:15.562 17:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.562 17:30:23 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:20:15.562 17:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.562 17:30:23 -- common/autotest_common.sh@10 -- # set +x 00:20:15.562 malloc0 00:20:15.562 17:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.562 17:30:23 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.562 17:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.562 17:30:23 -- common/autotest_common.sh@10 -- # set +x 00:20:15.562 17:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.562 17:30:23 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:20:15.562 17:30:23 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:20:15.562 17:30:23 -- nvmf/common.sh@520 -- # config=() 00:20:15.562 17:30:23 -- nvmf/common.sh@520 -- # local subsystem config 00:20:15.562 17:30:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:15.562 17:30:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:15.562 { 00:20:15.562 "params": { 00:20:15.562 "name": "Nvme$subsystem", 00:20:15.562 "trtype": "$TEST_TRANSPORT", 00:20:15.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.562 "adrfam": "ipv4", 00:20:15.562 "trsvcid": "$NVMF_PORT", 00:20:15.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.562 "hdgst": ${hdgst:-false}, 00:20:15.562 "ddgst": ${ddgst:-false} 00:20:15.562 }, 00:20:15.562 "method": "bdev_nvme_attach_controller" 00:20:15.562 } 00:20:15.562 
EOF 00:20:15.562 )") 00:20:15.562 17:30:23 -- nvmf/common.sh@542 -- # cat 00:20:15.562 17:30:23 -- nvmf/common.sh@544 -- # jq . 00:20:15.562 17:30:23 -- nvmf/common.sh@545 -- # IFS=, 00:20:15.562 17:30:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:15.562 "params": { 00:20:15.562 "name": "Nvme1", 00:20:15.562 "trtype": "tcp", 00:20:15.562 "traddr": "10.0.0.2", 00:20:15.562 "adrfam": "ipv4", 00:20:15.562 "trsvcid": "4420", 00:20:15.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.562 "hdgst": false, 00:20:15.562 "ddgst": false 00:20:15.562 }, 00:20:15.562 "method": "bdev_nvme_attach_controller" 00:20:15.562 }' 00:20:15.562 [2024-10-13 17:30:23.927577] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:15.562 [2024-10-13 17:30:23.927641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201986 ] 00:20:15.562 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.562 [2024-10-13 17:30:23.995040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.562 [2024-10-13 17:30:24.031707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.822 Running I/O for 10 seconds... 
00:20:25.908 00:20:25.908 Latency(us) 00:20:25.908 [2024-10-13T15:30:34.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.908 [2024-10-13T15:30:34.432Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:25.908 Verification LBA range: start 0x0 length 0x1000 00:20:25.908 Nvme1n1 : 10.01 13782.49 107.68 0.00 0.00 9259.41 1078.61 22282.24 00:20:25.908 [2024-10-13T15:30:34.432Z] =================================================================================================================== 00:20:25.908 [2024-10-13T15:30:34.432Z] Total : 13782.49 107.68 0.00 0.00 9259.41 1078.61 22282.24 00:20:26.217 17:30:34 -- target/zcopy.sh@39 -- # perfpid=3204105 00:20:26.217 17:30:34 -- target/zcopy.sh@41 -- # xtrace_disable 00:20:26.217 17:30:34 -- common/autotest_common.sh@10 -- # set +x 00:20:26.217 17:30:34 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:26.217 17:30:34 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:26.217 17:30:34 -- nvmf/common.sh@520 -- # config=() 00:20:26.217 17:30:34 -- nvmf/common.sh@520 -- # local subsystem config 00:20:26.217 17:30:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:26.217 17:30:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:26.217 { 00:20:26.217 "params": { 00:20:26.217 "name": "Nvme$subsystem", 00:20:26.217 "trtype": "$TEST_TRANSPORT", 00:20:26.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.217 "adrfam": "ipv4", 00:20:26.217 "trsvcid": "$NVMF_PORT", 00:20:26.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.217 "hdgst": ${hdgst:-false}, 00:20:26.217 "ddgst": ${ddgst:-false} 00:20:26.217 }, 00:20:26.217 "method": "bdev_nvme_attach_controller" 00:20:26.217 } 00:20:26.217 EOF 00:20:26.217 )") 00:20:26.217 [2024-10-13 17:30:34.482727] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.217 [2024-10-13 17:30:34.482755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.217 17:30:34 -- nvmf/common.sh@542 -- # cat 00:20:26.217 17:30:34 -- nvmf/common.sh@544 -- # jq . 00:20:26.217 17:30:34 -- nvmf/common.sh@545 -- # IFS=, 00:20:26.217 17:30:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:26.217 "params": { 00:20:26.217 "name": "Nvme1", 00:20:26.217 "trtype": "tcp", 00:20:26.217 "traddr": "10.0.0.2", 00:20:26.217 "adrfam": "ipv4", 00:20:26.217 "trsvcid": "4420", 00:20:26.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.217 "hdgst": false, 00:20:26.217 "ddgst": false 00:20:26.217 }, 00:20:26.217 "method": "bdev_nvme_attach_controller" 00:20:26.217 }' 00:20:26.217 [2024-10-13 17:30:34.494731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.217 [2024-10-13 17:30:34.494741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.217 [2024-10-13 17:30:34.506760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.217 [2024-10-13 17:30:34.506770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.217 [2024-10-13 17:30:34.518793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.217 [2024-10-13 17:30:34.518803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.217 [2024-10-13 17:30:34.523760] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:20:26.217 [2024-10-13 17:30:34.523809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204105 ]
00:20:26.217 [2024-10-13 17:30:34.530822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.530836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.542855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.542863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 EAL: No free 2048 kB hugepages reported on node 1
00:20:26.218 [2024-10-13 17:30:34.554883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.554892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.566915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.566924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.578948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.578957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.584797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:26.218 [2024-10-13 17:30:34.590979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.590988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.603012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.603025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.613405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:26.218 [2024-10-13 17:30:34.615041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.615052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.627083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.627094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.639116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.639128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.651135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.651145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.663165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.663175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.675234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.675249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.687238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.218 [2024-10-13 17:30:34.687248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.218 [2024-10-13 17:30:34.699273]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.218 [2024-10-13 17:30:34.699283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.218 [2024-10-13 17:30:34.711329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.218 [2024-10-13 17:30:34.711339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.218 [2024-10-13 17:30:34.723354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.218 [2024-10-13 17:30:34.723362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.735384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.735392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.747417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.747424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.759451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.759460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.771483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.771490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.783518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.783526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.795555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:20:26.478 [2024-10-13 17:30:34.795563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.478 [2024-10-13 17:30:34.807582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.478 [2024-10-13 17:30:34.807591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.478 [2024-10-13 17:30:34.819614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.478 [2024-10-13 17:30:34.819622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.478 [2024-10-13 17:30:34.831646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.478 [2024-10-13 17:30:34.831654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.478 [2024-10-13 17:30:34.843678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.478 [2024-10-13 17:30:34.843687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.478 [2024-10-13 17:30:34.856235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.478 [2024-10-13 17:30:34.856248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.478 [2024-10-13 17:30:34.867743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.478 [2024-10-13 17:30:34.867752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.478 Running I/O for 5 seconds...
00:20:26.478 [2024-10-13 17:30:34.882188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.882204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.895440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.895456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.908433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.908449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.921589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.921604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.934495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.934511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.947741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.947757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.960607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.960622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.973367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.973382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.986324] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.986339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.478 [2024-10-13 17:30:34.998370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.478 [2024-10-13 17:30:34.998384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.011898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.011913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.025239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.025254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.037550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.037565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.050754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.050769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.063645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.063660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.076629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.076644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.089886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.089901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.102326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.102341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.115318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.115333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.128525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.128540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.141227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.141241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.154566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.154582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.168003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.168019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.180775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.180790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.194320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 
[2024-10-13 17:30:35.194335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.207568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.207583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.220529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.220547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.233237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.233253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.246233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.246248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.740 [2024-10-13 17:30:35.259432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.740 [2024-10-13 17:30:35.259447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.272317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.272332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.285032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.285046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.298026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.298041] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.310888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.310903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.323919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.323935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.336779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.336794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.349448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.349463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.362340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.362355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.375155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.375170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.388429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.388443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.401431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.401446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:27.001 [2024-10-13 17:30:35.414669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.414684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.426932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.426947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.001 [2024-10-13 17:30:35.439887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.001 [2024-10-13 17:30:35.439901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.002 [2024-10-13 17:30:35.452491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.002 [2024-10-13 17:30:35.452507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.002 [2024-10-13 17:30:35.465434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.002 [2024-10-13 17:30:35.465453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.002 [2024-10-13 17:30:35.478341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.002 [2024-10-13 17:30:35.478356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.002 [2024-10-13 17:30:35.491387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.002 [2024-10-13 17:30:35.491401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.002 [2024-10-13 17:30:35.504083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.002 [2024-10-13 17:30:35.504097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.002 [2024-10-13 17:30:35.516747] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.002 [2024-10-13 17:30:35.516762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.262 [2024-10-13 17:30:35.529912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.262 [2024-10-13 17:30:35.529927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.262 [2024-10-13 17:30:35.542943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.262 [2024-10-13 17:30:35.542959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.262 [2024-10-13 17:30:35.556165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.262 [2024-10-13 17:30:35.556181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.262 [2024-10-13 17:30:35.569049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.262 [2024-10-13 17:30:35.569069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.581546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.581561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.594589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.594604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.607776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.607791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.619471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.619486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.633276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.633290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.646459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.646473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.659919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.659934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.672723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.672739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.685669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.685684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.698707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.698722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.711107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.711126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.724177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 
[2024-10-13 17:30:35.724193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.737057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.737077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.750154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.750170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.762307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.762322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.263 [2024-10-13 17:30:35.775814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.263 [2024-10-13 17:30:35.775830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.788593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.788608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.801581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.801597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.814419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.814434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.827285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.827300] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.840487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.840502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.853537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.853552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.866699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.866714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.879860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.879876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.892569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.892584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.905053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.905074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.917753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.917769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.930982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.930997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:27.524 [2024-10-13 17:30:35.943719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.943735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.956754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.524 [2024-10-13 17:30:35.956774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.524 [2024-10-13 17:30:35.969565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.525 [2024-10-13 17:30:35.969580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.525 [2024-10-13 17:30:35.982921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.525 [2024-10-13 17:30:35.982936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.525 [2024-10-13 17:30:35.996072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.525 [2024-10-13 17:30:35.996087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.525 [2024-10-13 17:30:36.009004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.525 [2024-10-13 17:30:36.009019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.525 [2024-10-13 17:30:36.021828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.525 [2024-10-13 17:30:36.021843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.525 [2024-10-13 17:30:36.034968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.525 [2024-10-13 17:30:36.034984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.525 [2024-10-13 17:30:36.048070] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:27.525 [2024-10-13 17:30:36.048085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.785 [... same two-line error pair (subsystem.c:1793: "Requested NSID 1 already in use" / nvmf_rpc.c:1513: "Unable to add namespace") repeated at roughly 13 ms intervals from 17:30:36.061 through 17:30:38.139; identical repeats elided ...] 00:20:29.876 [2024-10-13 17:30:38.152034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.152054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.165129] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.165144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.178066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.178081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.191169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.191183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.203838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.203853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.216687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.216702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.228657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.228672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.241497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.241512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.254519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.254535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.267402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.267417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.280325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.280340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.293369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.293384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.306428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.306443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.319410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.319425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.332119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.332134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.345331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.345346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.358528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.358544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.371824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 
[2024-10-13 17:30:38.371839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.384814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.384829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.876 [2024-10-13 17:30:38.397891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:29.876 [2024-10-13 17:30:38.397906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.411084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.411099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.423959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.423974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.436865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.436879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.449900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.449915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.462637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.462651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.475297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.475312] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.488458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.488473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.501298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.501313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.514410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.514425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.527603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.527618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.540884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.540898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.553524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.553539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.566423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.566438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.579451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.579466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:30.138 [2024-10-13 17:30:38.591880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.591894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.604980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.604995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.617748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.617763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.630591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.630607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.643569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.643585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.138 [2024-10-13 17:30:38.656310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.138 [2024-10-13 17:30:38.656325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.669244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.669260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.682245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.682260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.694884] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.694899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.708248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.708263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.721224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.721240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.734058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.734078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.746844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.746859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.759734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.759749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.772862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.772877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.785984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.785999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.799058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.799078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.812278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.812294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.825389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.825405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.838139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.838155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.399 [2024-10-13 17:30:38.851038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.399 [2024-10-13 17:30:38.851053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.400 [2024-10-13 17:30:38.863989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.400 [2024-10-13 17:30:38.864004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.400 [2024-10-13 17:30:38.877187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.400 [2024-10-13 17:30:38.877202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.400 [2024-10-13 17:30:38.890302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.400 [2024-10-13 17:30:38.890316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.400 [2024-10-13 17:30:38.903611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.400 
[2024-10-13 17:30:38.903626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.400 [2024-10-13 17:30:38.916747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.400 [2024-10-13 17:30:38.916762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.660 [2024-10-13 17:30:38.930044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.660 [2024-10-13 17:30:38.930059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.660 [2024-10-13 17:30:38.942827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:38.942842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:38.955545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:38.955560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:38.968493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:38.968507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:38.981531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:38.981545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:38.994214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:38.994230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.007383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.007398] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.020136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.020151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.033333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.033347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.045687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.045702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.058723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.058738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.071988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.072003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.084525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.084540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.097750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.097765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.110729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.110746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:30.661 [2024-10-13 17:30:39.123237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.123256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.135902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.135917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.149101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.149117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.162088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.162104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.661 [2024-10-13 17:30:39.175255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.661 [2024-10-13 17:30:39.175270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.921 [2024-10-13 17:30:39.188106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.188122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.200699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.200715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.213617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.213633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.226764] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.226779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.239861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.239876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.252584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.252599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.265532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.265547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.278554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.278569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.291616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.291631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.304596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.304612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.317095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.317109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.329895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.329911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.342954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.342970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.356071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.356089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.369175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.369195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.381816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.381831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.394882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.394897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.407687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.407702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.419929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.419944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.432839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 
[2024-10-13 17:30:39.432854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:30.922 [2024-10-13 17:30:39.445827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:30.922 [2024-10-13 17:30:39.445843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.182 [2024-10-13 17:30:39.458181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.182 [2024-10-13 17:30:39.458196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.182 [2024-10-13 17:30:39.470572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.182 [2024-10-13 17:30:39.470587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.483522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.483537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.496065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.496081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.508804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.508819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.521665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.521680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.534544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.534559] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.547556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.547571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.560528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.560543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.573434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.573449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.586569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.586585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.599119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.599134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.612084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.612103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.625114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.625129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.183 [2024-10-13 17:30:39.637832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.637847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:31.183 [2024-10-13 17:30:39.650756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.183 [2024-10-13 17:30:39.650772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.443 [previous two messages repeated at ~13 ms intervals through 2024-10-13 17:30:39.883404] 00:20:31.444 00:20:31.444 Latency(us) 00:20:31.444 [2024-10-13T15:30:39.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.444 [2024-10-13T15:30:39.968Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:20:31.444 Nvme1n1 : 5.01 20091.42 156.96 0.00 0.00 6364.56 2621.44 14527.15 00:20:31.444 [2024-10-13T15:30:39.968Z] =================================================================================================================== 00:20:31.444 [2024-10-13T15:30:39.968Z] Total : 20091.42 156.96 0.00 0.00 6364.56 2621.44 14527.15 00:20:31.444 [2024-10-13 17:30:39.892871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:31.444 [2024-10-13 17:30:39.892885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:31.704 [previous two messages repeated at ~12 ms intervals through 2024-10-13 17:30:40.001148] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3204105) - No such process 00:20:31.704 17:30:40 -- target/zcopy.sh@49 -- # wait 3204105 00:20:31.704 17:30:40 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:31.704 17:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.704 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:20:31.704 17:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.704 17:30:40 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:31.704 17:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.704 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:20:31.704 delay0 00:20:31.704 17:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.704 17:30:40 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:20:31.704 17:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.704 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:20:31.704 17:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.704 17:30:40 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:20:31.704 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.704 [2024-10-13 17:30:40.188246] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:39.855 [2024-10-13 17:30:46.970286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820e0 is same with the state(5) to be set 00:20:39.855 [2024-10-13 17:30:46.970318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820e0 is same with the state(5) to be set 00:20:39.855 Initializing NVMe Controllers 00:20:39.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:39.855 Initialization complete. 
Launching workers. 00:20:39.855 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 282, failed: 21522 00:20:39.855 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21720, failed to submit 84 00:20:39.855 success 21612, unsuccess 108, failed 0 00:20:39.855 17:30:46 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:39.855 17:30:46 -- target/zcopy.sh@60 -- # nvmftestfini 00:20:39.855 17:30:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:39.855 17:30:46 -- nvmf/common.sh@116 -- # sync 00:20:39.855 17:30:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:39.855 17:30:46 -- nvmf/common.sh@119 -- # set +e 00:20:39.855 17:30:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:39.855 17:30:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:39.855 rmmod nvme_tcp 00:20:39.855 rmmod nvme_fabrics 00:20:39.855 rmmod nvme_keyring 00:20:39.855 17:30:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:39.855 17:30:47 -- nvmf/common.sh@123 -- # set -e 00:20:39.855 17:30:47 -- nvmf/common.sh@124 -- # return 0 00:20:39.855 17:30:47 -- nvmf/common.sh@477 -- # '[' -n 3201813 ']' 00:20:39.855 17:30:47 -- nvmf/common.sh@478 -- # killprocess 3201813 00:20:39.855 17:30:47 -- common/autotest_common.sh@926 -- # '[' -z 3201813 ']' 00:20:39.855 17:30:47 -- common/autotest_common.sh@930 -- # kill -0 3201813 00:20:39.855 17:30:47 -- common/autotest_common.sh@931 -- # uname 00:20:39.855 17:30:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:39.855 17:30:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3201813 00:20:39.856 17:30:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:39.856 17:30:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:39.856 17:30:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3201813' 00:20:39.856 killing process with pid 3201813 00:20:39.856 17:30:47 -- 
common/autotest_common.sh@945 -- # kill 3201813 00:20:39.856 17:30:47 -- common/autotest_common.sh@950 -- # wait 3201813 00:20:39.856 17:30:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:39.856 17:30:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:39.856 17:30:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:39.856 17:30:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.856 17:30:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:39.856 17:30:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.856 17:30:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.856 17:30:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.804 17:30:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:40.804 00:20:40.804 real 0m34.004s 00:20:40.804 user 0m44.967s 00:20:40.804 sys 0m11.331s 00:20:40.804 17:30:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.804 17:30:49 -- common/autotest_common.sh@10 -- # set +x 00:20:40.804 ************************************ 00:20:40.804 END TEST nvmf_zcopy 00:20:40.804 ************************************ 00:20:41.066 17:30:49 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:41.066 17:30:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:41.066 17:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:41.066 17:30:49 -- common/autotest_common.sh@10 -- # set +x 00:20:41.066 ************************************ 00:20:41.066 START TEST nvmf_nmic 00:20:41.066 ************************************ 00:20:41.066 17:30:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:41.066 * Looking for test storage... 
00:20:41.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:41.066 17:30:49 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.066 17:30:49 -- nvmf/common.sh@7 -- # uname -s 00:20:41.066 17:30:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.066 17:30:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.066 17:30:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.066 17:30:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.066 17:30:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.066 17:30:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.066 17:30:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.066 17:30:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.066 17:30:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.066 17:30:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.066 17:30:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:41.066 17:30:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:41.066 17:30:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.066 17:30:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.066 17:30:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.066 17:30:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.066 17:30:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.066 17:30:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.066 17:30:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.066 17:30:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.066 17:30:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.066 17:30:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.066 17:30:49 -- paths/export.sh@5 -- # export PATH 00:20:41.066 17:30:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.066 17:30:49 -- nvmf/common.sh@46 -- # : 0 00:20:41.067 17:30:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:41.067 17:30:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:41.067 17:30:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:41.067 17:30:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.067 17:30:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.067 17:30:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:41.067 17:30:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:41.067 17:30:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:41.067 17:30:49 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:41.067 17:30:49 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:41.067 17:30:49 -- target/nmic.sh@14 -- # nvmftestinit 00:20:41.067 17:30:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:41.067 17:30:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.067 17:30:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:41.067 17:30:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:41.067 17:30:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:41.067 17:30:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.067 17:30:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.067 17:30:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.067 17:30:49 -- nvmf/common.sh@402 
-- # [[ phy != virt ]] 00:20:41.067 17:30:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:41.067 17:30:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:41.067 17:30:49 -- common/autotest_common.sh@10 -- # set +x 00:20:49.206 17:30:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:49.206 17:30:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:49.206 17:30:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:49.206 17:30:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:49.206 17:30:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:49.206 17:30:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:49.206 17:30:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:49.206 17:30:56 -- nvmf/common.sh@294 -- # net_devs=() 00:20:49.206 17:30:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:49.206 17:30:56 -- nvmf/common.sh@295 -- # e810=() 00:20:49.206 17:30:56 -- nvmf/common.sh@295 -- # local -ga e810 00:20:49.206 17:30:56 -- nvmf/common.sh@296 -- # x722=() 00:20:49.206 17:30:56 -- nvmf/common.sh@296 -- # local -ga x722 00:20:49.206 17:30:56 -- nvmf/common.sh@297 -- # mlx=() 00:20:49.206 17:30:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:49.206 17:30:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.206 17:30:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:49.206 17:30:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:49.206 17:30:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:49.206 17:30:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:49.206 17:30:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:49.206 17:30:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:49.206 17:30:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:49.206 17:30:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:49.206 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:49.206 17:30:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:49.206 17:30:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:49.206 17:30:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.206 17:30:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.206 17:30:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:49.206 17:30:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:49.206 17:30:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:49.206 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:49.207 17:30:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:49.207 17:30:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:49.207 17:30:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.207 17:30:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:49.207 17:30:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.207 17:30:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:49.207 Found net devices under 0000:31:00.0: cvl_0_0 00:20:49.207 17:30:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.207 17:30:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:49.207 17:30:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.207 17:30:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:49.207 17:30:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.207 17:30:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:49.207 Found net devices under 0000:31:00.1: cvl_0_1 00:20:49.207 17:30:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.207 17:30:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:49.207 17:30:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:49.207 17:30:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:49.207 17:30:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.207 17:30:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.207 17:30:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.207 17:30:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:49.207 17:30:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.207 17:30:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.207 17:30:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:20:49.207 17:30:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.207 17:30:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.207 17:30:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:49.207 17:30:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:49.207 17:30:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.207 17:30:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.207 17:30:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.207 17:30:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.207 17:30:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:49.207 17:30:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.207 17:30:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.207 17:30:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.207 17:30:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:49.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:20:49.207 00:20:49.207 --- 10.0.0.2 ping statistics --- 00:20:49.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.207 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:20:49.207 17:30:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:20:49.207 00:20:49.207 --- 10.0.0.1 ping statistics --- 00:20:49.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.207 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:20:49.207 17:30:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.207 17:30:56 -- nvmf/common.sh@410 -- # return 0 00:20:49.207 17:30:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:49.207 17:30:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.207 17:30:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:49.207 17:30:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.207 17:30:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:49.207 17:30:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:49.207 17:30:56 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:49.207 17:30:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:49.207 17:30:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:49.207 17:30:56 -- common/autotest_common.sh@10 -- # set +x 00:20:49.207 17:30:57 -- nvmf/common.sh@469 -- # nvmfpid=3210743 00:20:49.207 17:30:57 -- nvmf/common.sh@470 -- # waitforlisten 3210743 00:20:49.207 17:30:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:49.207 17:30:57 -- common/autotest_common.sh@819 -- # '[' -z 3210743 ']' 00:20:49.207 17:30:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.207 17:30:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:49.207 17:30:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:49.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.207 17:30:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:49.207 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.207 [2024-10-13 17:30:57.061595] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:49.207 [2024-10-13 17:30:57.061663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.207 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.207 [2024-10-13 17:30:57.135864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.207 [2024-10-13 17:30:57.175769] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:49.207 [2024-10-13 17:30:57.175910] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.207 [2024-10-13 17:30:57.175922] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.207 [2024-10-13 17:30:57.175933] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:49.207 [2024-10-13 17:30:57.176105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.207 [2024-10-13 17:30:57.176171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.207 [2024-10-13 17:30:57.176496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.207 [2024-10-13 17:30:57.176498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.467 17:30:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:49.467 17:30:57 -- common/autotest_common.sh@852 -- # return 0 00:20:49.467 17:30:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:49.467 17:30:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:49.467 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.467 17:30:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.467 17:30:57 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:49.467 17:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.467 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.467 [2024-10-13 17:30:57.900425] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.467 17:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.467 17:30:57 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:49.467 17:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.467 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.467 Malloc0 00:20:49.467 17:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.467 17:30:57 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:49.467 17:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.467 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.467 17:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:20:49.467 17:30:57 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:49.467 17:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.467 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.467 17:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.467 17:30:57 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.467 17:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.467 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.467 [2024-10-13 17:30:57.959644] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.467 17:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.467 17:30:57 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:49.467 test case1: single bdev can't be used in multiple subsystems 00:20:49.467 17:30:57 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:49.467 17:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.467 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.467 17:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.467 17:30:57 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:49.467 17:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.467 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.467 17:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.467 17:30:57 -- target/nmic.sh@28 -- # nmic_status=0 00:20:49.467 17:30:57 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:49.467 17:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.467 17:30:57 -- common/autotest_common.sh@10 
-- # set +x 00:20:49.727 [2024-10-13 17:30:57.995600] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:49.727 [2024-10-13 17:30:57.995623] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:49.728 [2024-10-13 17:30:57.995631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.728 request: 00:20:49.728 { 00:20:49.728 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.728 "namespace": { 00:20:49.728 "bdev_name": "Malloc0" 00:20:49.728 }, 00:20:49.728 "method": "nvmf_subsystem_add_ns", 00:20:49.728 "req_id": 1 00:20:49.728 } 00:20:49.728 Got JSON-RPC error response 00:20:49.728 response: 00:20:49.728 { 00:20:49.728 "code": -32602, 00:20:49.728 "message": "Invalid parameters" 00:20:49.728 } 00:20:49.728 17:30:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:49.728 17:30:58 -- target/nmic.sh@29 -- # nmic_status=1 00:20:49.728 17:30:58 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:49.728 17:30:58 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:49.728 Adding namespace failed - expected result. 
00:20:49.728 17:30:58 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:49.728 test case2: host connect to nvmf target in multiple paths 00:20:49.728 17:30:58 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:49.728 17:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.728 17:30:58 -- common/autotest_common.sh@10 -- # set +x 00:20:49.728 [2024-10-13 17:30:58.007743] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:49.728 17:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.728 17:30:58 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:51.110 17:30:59 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:53.022 17:31:01 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:53.022 17:31:01 -- common/autotest_common.sh@1177 -- # local i=0 00:20:53.022 17:31:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:53.022 17:31:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:53.022 17:31:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:54.932 17:31:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:54.932 17:31:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:54.932 17:31:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:54.932 17:31:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:54.932 17:31:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 
00:20:54.932 17:31:03 -- common/autotest_common.sh@1187 -- # return 0 00:20:54.932 17:31:03 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:54.932 [global] 00:20:54.932 thread=1 00:20:54.932 invalidate=1 00:20:54.932 rw=write 00:20:54.932 time_based=1 00:20:54.932 runtime=1 00:20:54.932 ioengine=libaio 00:20:54.932 direct=1 00:20:54.932 bs=4096 00:20:54.933 iodepth=1 00:20:54.933 norandommap=0 00:20:54.933 numjobs=1 00:20:54.933 00:20:54.933 verify_dump=1 00:20:54.933 verify_backlog=512 00:20:54.933 verify_state_save=0 00:20:54.933 do_verify=1 00:20:54.933 verify=crc32c-intel 00:20:54.933 [job0] 00:20:54.933 filename=/dev/nvme0n1 00:20:54.933 Could not set queue depth (nvme0n1) 00:20:55.192 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:55.192 fio-3.35 00:20:55.192 Starting 1 thread 00:20:56.134 00:20:56.134 job0: (groupid=0, jobs=1): err= 0: pid=3212211: Sun Oct 13 17:31:04 2024 00:20:56.134 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:20:56.134 slat (nsec): min=7018, max=61631, avg=26338.46, stdev=3270.93 00:20:56.134 clat (usec): min=563, max=1266, avg=993.55, stdev=77.08 00:20:56.134 lat (usec): min=570, max=1292, avg=1019.89, stdev=77.98 00:20:56.134 clat percentiles (usec): 00:20:56.134 | 1.00th=[ 717], 5.00th=[ 865], 10.00th=[ 914], 20.00th=[ 955], 00:20:56.134 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 996], 60.00th=[ 1012], 00:20:56.134 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090], 00:20:56.134 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1270], 99.95th=[ 1270], 00:20:56.134 | 99.99th=[ 1270] 00:20:56.134 write: IOPS=734, BW=2937KiB/s (3008kB/s)(2940KiB/1001msec); 0 zone resets 00:20:56.134 slat (usec): min=9, max=27687, avg=65.43, stdev=1020.31 00:20:56.134 clat (usec): min=302, max=842, avg=572.40, stdev=98.08 00:20:56.134 lat (usec): min=336, max=28213, 
avg=637.83, stdev=1023.82 00:20:56.134 clat percentiles (usec): 00:20:56.134 | 1.00th=[ 343], 5.00th=[ 404], 10.00th=[ 429], 20.00th=[ 474], 00:20:56.134 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 611], 00:20:56.134 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 693], 95.00th=[ 709], 00:20:56.134 | 99.00th=[ 750], 99.50th=[ 758], 99.90th=[ 840], 99.95th=[ 840], 00:20:56.134 | 99.99th=[ 840] 00:20:56.134 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:56.134 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:56.134 lat (usec) : 500=15.32%, 750=43.79%, 1000=20.61% 00:20:56.134 lat (msec) : 2=20.29% 00:20:56.134 cpu : usr=2.00%, sys=3.20%, ctx=1249, majf=0, minf=1 00:20:56.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.135 issued rwts: total=512,735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:56.135 00:20:56.135 Run status group 0 (all jobs): 00:20:56.135 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:20:56.135 WRITE: bw=2937KiB/s (3008kB/s), 2937KiB/s-2937KiB/s (3008kB/s-3008kB/s), io=2940KiB (3011kB), run=1001-1001msec 00:20:56.135 00:20:56.135 Disk stats (read/write): 00:20:56.135 nvme0n1: ios=538/571, merge=0/0, ticks=1505/315, in_queue=1820, util=98.70% 00:20:56.135 17:31:04 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:56.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:56.396 17:31:04 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:56.396 17:31:04 -- common/autotest_common.sh@1198 -- # local i=0 00:20:56.396 17:31:04 -- common/autotest_common.sh@1199 -- # lsblk -o 
NAME,SERIAL 00:20:56.396 17:31:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:56.396 17:31:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:56.396 17:31:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:56.396 17:31:04 -- common/autotest_common.sh@1210 -- # return 0 00:20:56.396 17:31:04 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:56.396 17:31:04 -- target/nmic.sh@53 -- # nvmftestfini 00:20:56.396 17:31:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:56.396 17:31:04 -- nvmf/common.sh@116 -- # sync 00:20:56.396 17:31:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:56.396 17:31:04 -- nvmf/common.sh@119 -- # set +e 00:20:56.396 17:31:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:56.396 17:31:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:56.396 rmmod nvme_tcp 00:20:56.396 rmmod nvme_fabrics 00:20:56.396 rmmod nvme_keyring 00:20:56.396 17:31:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:56.396 17:31:04 -- nvmf/common.sh@123 -- # set -e 00:20:56.396 17:31:04 -- nvmf/common.sh@124 -- # return 0 00:20:56.396 17:31:04 -- nvmf/common.sh@477 -- # '[' -n 3210743 ']' 00:20:56.396 17:31:04 -- nvmf/common.sh@478 -- # killprocess 3210743 00:20:56.396 17:31:04 -- common/autotest_common.sh@926 -- # '[' -z 3210743 ']' 00:20:56.396 17:31:04 -- common/autotest_common.sh@930 -- # kill -0 3210743 00:20:56.396 17:31:04 -- common/autotest_common.sh@931 -- # uname 00:20:56.396 17:31:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:56.396 17:31:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3210743 00:20:56.396 17:31:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:56.396 17:31:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:56.396 17:31:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3210743' 00:20:56.396 killing process with pid 3210743 
00:20:56.396 17:31:04 -- common/autotest_common.sh@945 -- # kill 3210743 00:20:56.396 17:31:04 -- common/autotest_common.sh@950 -- # wait 3210743 00:20:56.656 17:31:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:56.656 17:31:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:56.656 17:31:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:56.656 17:31:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.656 17:31:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:56.656 17:31:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.656 17:31:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.656 17:31:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.201 17:31:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:59.201 00:20:59.201 real 0m17.756s 00:20:59.201 user 0m44.901s 00:20:59.201 sys 0m6.462s 00:20:59.201 17:31:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.201 17:31:07 -- common/autotest_common.sh@10 -- # set +x 00:20:59.201 ************************************ 00:20:59.201 END TEST nvmf_nmic 00:20:59.201 ************************************ 00:20:59.201 17:31:07 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:59.201 17:31:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:59.201 17:31:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:59.201 17:31:07 -- common/autotest_common.sh@10 -- # set +x 00:20:59.201 ************************************ 00:20:59.201 START TEST nvmf_fio_target 00:20:59.201 ************************************ 00:20:59.201 17:31:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:59.201 * Looking for test storage... 
00:20:59.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:59.202 17:31:07 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.202 17:31:07 -- nvmf/common.sh@7 -- # uname -s 00:20:59.202 17:31:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.202 17:31:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.202 17:31:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.202 17:31:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.202 17:31:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.202 17:31:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.202 17:31:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.202 17:31:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.202 17:31:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.202 17:31:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.202 17:31:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:59.202 17:31:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:59.202 17:31:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.202 17:31:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.202 17:31:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.202 17:31:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.202 17:31:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.202 17:31:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.202 17:31:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.202 17:31:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.202 17:31:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.202 17:31:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.202 17:31:07 -- paths/export.sh@5 -- # export PATH 00:20:59.202 17:31:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.202 17:31:07 -- nvmf/common.sh@46 -- # : 0 00:20:59.202 17:31:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:59.202 17:31:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:59.202 17:31:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:59.202 17:31:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.202 17:31:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.202 17:31:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:59.202 17:31:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:59.202 17:31:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:59.202 17:31:07 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:59.202 17:31:07 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:59.202 17:31:07 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:59.202 17:31:07 -- target/fio.sh@16 -- # nvmftestinit 00:20:59.202 17:31:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:59.202 17:31:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.202 17:31:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:59.202 17:31:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:59.202 17:31:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:59.202 17:31:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.202 17:31:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:20:59.202 17:31:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.202 17:31:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:59.202 17:31:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:59.202 17:31:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:59.202 17:31:07 -- common/autotest_common.sh@10 -- # set +x 00:21:05.789 17:31:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:05.789 17:31:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:05.789 17:31:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:05.789 17:31:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:05.789 17:31:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:05.789 17:31:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:05.789 17:31:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:05.789 17:31:14 -- nvmf/common.sh@294 -- # net_devs=() 00:21:05.789 17:31:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:05.789 17:31:14 -- nvmf/common.sh@295 -- # e810=() 00:21:05.789 17:31:14 -- nvmf/common.sh@295 -- # local -ga e810 00:21:05.789 17:31:14 -- nvmf/common.sh@296 -- # x722=() 00:21:05.789 17:31:14 -- nvmf/common.sh@296 -- # local -ga x722 00:21:05.789 17:31:14 -- nvmf/common.sh@297 -- # mlx=() 00:21:05.789 17:31:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:05.789 17:31:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.789 17:31:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.789 17:31:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.789 17:31:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.789 17:31:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.789 17:31:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.789 17:31:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.789 17:31:14 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.789 17:31:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.789 17:31:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.789 17:31:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.789 17:31:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:05.789 17:31:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:05.789 17:31:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:05.789 17:31:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:05.789 17:31:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:05.789 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:05.789 17:31:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:05.789 17:31:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:05.789 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:05.789 17:31:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:05.789 
17:31:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:05.789 17:31:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.789 17:31:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:05.789 17:31:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.789 17:31:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:05.789 Found net devices under 0000:31:00.0: cvl_0_0 00:21:05.789 17:31:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.789 17:31:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:05.789 17:31:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.789 17:31:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:05.789 17:31:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.789 17:31:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:05.789 Found net devices under 0000:31:00.1: cvl_0_1 00:21:05.789 17:31:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.789 17:31:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:05.789 17:31:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:05.789 17:31:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:05.789 17:31:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:05.789 17:31:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.789 17:31:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.789 17:31:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.789 17:31:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:05.789 17:31:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.789 17:31:14 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.789 17:31:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:05.789 17:31:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.789 17:31:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.789 17:31:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:05.789 17:31:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:05.789 17:31:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.789 17:31:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.050 17:31:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.050 17:31:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.050 17:31:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:06.050 17:31:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.050 17:31:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.050 17:31:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.050 17:31:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:06.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:21:06.050 00:21:06.050 --- 10.0.0.2 ping statistics --- 00:21:06.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.050 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:21:06.050 17:31:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:21:06.050 00:21:06.050 --- 10.0.0.1 ping statistics --- 00:21:06.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.050 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:21:06.050 17:31:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.050 17:31:14 -- nvmf/common.sh@410 -- # return 0 00:21:06.050 17:31:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:06.050 17:31:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.050 17:31:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:06.050 17:31:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:06.050 17:31:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.050 17:31:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:06.050 17:31:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:06.311 17:31:14 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:06.311 17:31:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:06.311 17:31:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:06.311 17:31:14 -- common/autotest_common.sh@10 -- # set +x 00:21:06.311 17:31:14 -- nvmf/common.sh@469 -- # nvmfpid=3216648 00:21:06.311 17:31:14 -- nvmf/common.sh@470 -- # waitforlisten 3216648 00:21:06.311 17:31:14 -- common/autotest_common.sh@819 -- # '[' -z 3216648 ']' 00:21:06.311 17:31:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.311 17:31:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:06.311 17:31:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:06.311 17:31:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:06.311 17:31:14 -- common/autotest_common.sh@10 -- # set +x 00:21:06.311 17:31:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:06.311 [2024-10-13 17:31:14.685881] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:06.311 [2024-10-13 17:31:14.685951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.311 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.311 [2024-10-13 17:31:14.760892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.311 [2024-10-13 17:31:14.799838] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:06.311 [2024-10-13 17:31:14.799981] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.311 [2024-10-13 17:31:14.799992] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.311 [2024-10-13 17:31:14.800001] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.311 [2024-10-13 17:31:14.800136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.311 [2024-10-13 17:31:14.800299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.311 [2024-10-13 17:31:14.800458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.311 [2024-10-13 17:31:14.800458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.253 17:31:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:07.253 17:31:15 -- common/autotest_common.sh@852 -- # return 0 00:21:07.253 17:31:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:07.253 17:31:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:07.253 17:31:15 -- common/autotest_common.sh@10 -- # set +x 00:21:07.253 17:31:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.253 17:31:15 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:07.253 [2024-10-13 17:31:15.645852] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.253 17:31:15 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:07.512 17:31:15 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:07.512 17:31:15 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:07.773 17:31:16 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:21:07.773 17:31:16 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:07.773 17:31:16 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:07.773 17:31:16 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:08.034 17:31:16 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:21:08.034 17:31:16 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:08.294 17:31:16 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:08.294 17:31:16 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:08.294 17:31:16 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:08.555 17:31:16 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:08.555 17:31:16 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:08.815 17:31:17 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:08.815 17:31:17 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:08.815 17:31:17 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:09.075 17:31:17 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:09.075 17:31:17 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:09.335 17:31:17 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:09.336 17:31:17 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:09.336 17:31:17 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.600 [2024-10-13 17:31:17.967551] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.600 17:31:17 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:21:09.860 17:31:18 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:09.860 17:31:18 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:11.771 17:31:19 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:11.771 17:31:19 -- common/autotest_common.sh@1177 -- # local i=0 00:21:11.771 17:31:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:11.771 17:31:19 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:21:11.771 17:31:19 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:21:11.771 17:31:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:13.710 17:31:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:13.710 17:31:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:13.710 17:31:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:21:13.710 17:31:21 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:21:13.710 17:31:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:13.710 17:31:21 -- common/autotest_common.sh@1187 -- # return 0 00:21:13.710 17:31:21 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:13.710 [global] 00:21:13.710 thread=1 00:21:13.710 invalidate=1 00:21:13.710 rw=write 00:21:13.710 time_based=1 00:21:13.710 runtime=1 00:21:13.710 ioengine=libaio 00:21:13.710 direct=1 00:21:13.710 bs=4096 00:21:13.710 
iodepth=1 00:21:13.710 norandommap=0 00:21:13.710 numjobs=1 00:21:13.710 00:21:13.710 verify_dump=1 00:21:13.710 verify_backlog=512 00:21:13.710 verify_state_save=0 00:21:13.710 do_verify=1 00:21:13.710 verify=crc32c-intel 00:21:13.710 [job0] 00:21:13.710 filename=/dev/nvme0n1 00:21:13.710 [job1] 00:21:13.710 filename=/dev/nvme0n2 00:21:13.710 [job2] 00:21:13.710 filename=/dev/nvme0n3 00:21:13.710 [job3] 00:21:13.710 filename=/dev/nvme0n4 00:21:13.710 Could not set queue depth (nvme0n1) 00:21:13.710 Could not set queue depth (nvme0n2) 00:21:13.710 Could not set queue depth (nvme0n3) 00:21:13.710 Could not set queue depth (nvme0n4) 00:21:13.974 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:13.974 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:13.974 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:13.974 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:13.974 fio-3.35 00:21:13.974 Starting 4 threads 00:21:15.388 00:21:15.388 job0: (groupid=0, jobs=1): err= 0: pid=3218536: Sun Oct 13 17:31:23 2024 00:21:15.388 read: IOPS=18, BW=74.7KiB/s (76.4kB/s)(76.0KiB/1018msec) 00:21:15.388 slat (nsec): min=10164, max=27392, avg=26219.58, stdev=3889.69 00:21:15.388 clat (usec): min=40864, max=41075, avg=40967.74, stdev=58.21 00:21:15.388 lat (usec): min=40891, max=41103, avg=40993.96, stdev=57.40 00:21:15.388 clat percentiles (usec): 00:21:15.388 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:21:15.388 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:15.388 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:15.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:15.388 | 99.99th=[41157] 00:21:15.388 write: IOPS=502, BW=2012KiB/s 
(2060kB/s)(2048KiB/1018msec); 0 zone resets 00:21:15.388 slat (nsec): min=9879, max=60560, avg=25925.98, stdev=12535.59 00:21:15.388 clat (usec): min=121, max=742, avg=434.26, stdev=103.07 00:21:15.388 lat (usec): min=133, max=777, avg=460.18, stdev=109.64 00:21:15.388 clat percentiles (usec): 00:21:15.388 | 1.00th=[ 184], 5.00th=[ 269], 10.00th=[ 289], 20.00th=[ 330], 00:21:15.388 | 30.00th=[ 383], 40.00th=[ 412], 50.00th=[ 441], 60.00th=[ 474], 00:21:15.388 | 70.00th=[ 498], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 594], 00:21:15.388 | 99.00th=[ 660], 99.50th=[ 725], 99.90th=[ 742], 99.95th=[ 742], 00:21:15.388 | 99.99th=[ 742] 00:21:15.388 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:21:15.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:15.388 lat (usec) : 250=1.69%, 500=66.48%, 750=28.25% 00:21:15.388 lat (msec) : 50=3.58% 00:21:15.388 cpu : usr=0.69%, sys=1.18%, ctx=535, majf=0, minf=1 00:21:15.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.388 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:15.388 job1: (groupid=0, jobs=1): err= 0: pid=3218549: Sun Oct 13 17:31:23 2024 00:21:15.388 read: IOPS=149, BW=599KiB/s (614kB/s)(600KiB/1001msec) 00:21:15.388 slat (nsec): min=7121, max=61449, avg=25613.05, stdev=6700.47 00:21:15.388 clat (usec): min=476, max=42268, avg=4887.12, stdev=12342.13 00:21:15.388 lat (usec): min=503, max=42294, avg=4912.73, stdev=12342.42 00:21:15.388 clat percentiles (usec): 00:21:15.388 | 1.00th=[ 529], 5.00th=[ 627], 10.00th=[ 652], 20.00th=[ 685], 00:21:15.388 | 30.00th=[ 750], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 848], 00:21:15.388 | 70.00th=[ 873], 80.00th=[ 889], 
90.00th=[ 1037], 95.00th=[41681], 00:21:15.388 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:15.388 | 99.99th=[42206] 00:21:15.388 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:21:15.388 slat (nsec): min=9864, max=65474, avg=32040.85, stdev=9207.22 00:21:15.388 clat (usec): min=215, max=788, avg=472.90, stdev=111.90 00:21:15.388 lat (usec): min=226, max=841, avg=504.94, stdev=115.26 00:21:15.388 clat percentiles (usec): 00:21:15.388 | 1.00th=[ 249], 5.00th=[ 281], 10.00th=[ 326], 20.00th=[ 379], 00:21:15.388 | 30.00th=[ 412], 40.00th=[ 441], 50.00th=[ 469], 60.00th=[ 502], 00:21:15.388 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 627], 95.00th=[ 652], 00:21:15.388 | 99.00th=[ 742], 99.50th=[ 775], 99.90th=[ 791], 99.95th=[ 791], 00:21:15.388 | 99.99th=[ 791] 00:21:15.388 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:21:15.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:15.388 lat (usec) : 250=1.06%, 500=45.32%, 750=36.71%, 1000=14.50% 00:21:15.388 lat (msec) : 2=0.15%, 50=2.27% 00:21:15.388 cpu : usr=1.00%, sys=2.00%, ctx=663, majf=0, minf=1 00:21:15.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.388 issued rwts: total=150,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:15.388 job2: (groupid=0, jobs=1): err= 0: pid=3218551: Sun Oct 13 17:31:23 2024 00:21:15.388 read: IOPS=17, BW=70.1KiB/s (71.8kB/s)(72.0KiB/1027msec) 00:21:15.388 slat (nsec): min=8484, max=25817, avg=24638.44, stdev=4034.30 00:21:15.388 clat (usec): min=1091, max=42010, avg=39483.80, stdev=9589.85 00:21:15.388 lat (usec): min=1117, max=42035, avg=39508.44, stdev=9589.66 00:21:15.388 clat percentiles 
(usec): 00:21:15.388 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[40633], 20.00th=[41157], 00:21:15.388 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:21:15.388 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:15.388 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:15.388 | 99.99th=[42206] 00:21:15.388 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:21:15.388 slat (nsec): min=9739, max=51350, avg=31248.15, stdev=7017.74 00:21:15.388 clat (usec): min=184, max=921, avg=577.55, stdev=134.59 00:21:15.388 lat (usec): min=195, max=953, avg=608.80, stdev=136.44 00:21:15.388 clat percentiles (usec): 00:21:15.388 | 1.00th=[ 262], 5.00th=[ 330], 10.00th=[ 400], 20.00th=[ 461], 00:21:15.388 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 635], 00:21:15.388 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 766], 00:21:15.388 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 922], 99.95th=[ 922], 00:21:15.388 | 99.99th=[ 922] 00:21:15.388 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:21:15.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:15.388 lat (usec) : 250=0.94%, 500=26.60%, 750=62.08%, 1000=6.98% 00:21:15.388 lat (msec) : 2=0.19%, 50=3.21% 00:21:15.388 cpu : usr=0.88%, sys=1.46%, ctx=530, majf=0, minf=2 00:21:15.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.388 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:15.388 job3: (groupid=0, jobs=1): err= 0: pid=3218552: Sun Oct 13 17:31:23 2024 00:21:15.388 read: IOPS=17, BW=69.4KiB/s (71.0kB/s)(72.0KiB/1038msec) 00:21:15.388 slat (nsec): 
min=27275, max=28558, avg=27670.78, stdev=283.17 00:21:15.388 clat (usec): min=1046, max=42064, avg=39544.09, stdev=9613.47 00:21:15.388 lat (usec): min=1073, max=42093, avg=39571.76, stdev=9613.51 00:21:15.388 clat percentiles (usec): 00:21:15.388 | 1.00th=[ 1045], 5.00th=[ 1045], 10.00th=[41157], 20.00th=[41157], 00:21:15.388 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:21:15.388 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:15.388 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:15.388 | 99.99th=[42206] 00:21:15.388 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:21:15.388 slat (usec): min=9, max=15990, avg=61.76, stdev=705.43 00:21:15.388 clat (usec): min=140, max=1399, avg=566.70, stdev=140.89 00:21:15.388 lat (usec): min=150, max=16439, avg=628.46, stdev=715.04 00:21:15.388 clat percentiles (usec): 00:21:15.388 | 1.00th=[ 241], 5.00th=[ 334], 10.00th=[ 396], 20.00th=[ 449], 00:21:15.388 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 603], 00:21:15.388 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 783], 00:21:15.388 | 99.00th=[ 898], 99.50th=[ 947], 99.90th=[ 1401], 99.95th=[ 1401], 00:21:15.388 | 99.99th=[ 1401] 00:21:15.388 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:21:15.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:15.388 lat (usec) : 250=1.32%, 500=29.25%, 750=58.30%, 1000=7.36% 00:21:15.388 lat (msec) : 2=0.57%, 50=3.21% 00:21:15.388 cpu : usr=1.35%, sys=1.54%, ctx=532, majf=0, minf=1 00:21:15.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.388 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.388 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:21:15.388 00:21:15.388 Run status group 0 (all jobs): 00:21:15.389 READ: bw=790KiB/s (809kB/s), 69.4KiB/s-599KiB/s (71.0kB/s-614kB/s), io=820KiB (840kB), run=1001-1038msec 00:21:15.389 WRITE: bw=7892KiB/s (8082kB/s), 1973KiB/s-2046KiB/s (2020kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1038msec 00:21:15.389 00:21:15.389 Disk stats (read/write): 00:21:15.389 nvme0n1: ios=36/512, merge=0/0, ticks=1412/212, in_queue=1624, util=83.87% 00:21:15.389 nvme0n2: ios=41/512, merge=0/0, ticks=1424/235, in_queue=1659, util=87.84% 00:21:15.389 nvme0n3: ios=70/512, merge=0/0, ticks=598/285, in_queue=883, util=94.60% 00:21:15.389 nvme0n4: ios=63/512, merge=0/0, ticks=742/228, in_queue=970, util=96.15% 00:21:15.389 17:31:23 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:15.389 [global] 00:21:15.389 thread=1 00:21:15.389 invalidate=1 00:21:15.389 rw=randwrite 00:21:15.389 time_based=1 00:21:15.389 runtime=1 00:21:15.389 ioengine=libaio 00:21:15.389 direct=1 00:21:15.389 bs=4096 00:21:15.389 iodepth=1 00:21:15.389 norandommap=0 00:21:15.389 numjobs=1 00:21:15.389 00:21:15.389 verify_dump=1 00:21:15.389 verify_backlog=512 00:21:15.389 verify_state_save=0 00:21:15.389 do_verify=1 00:21:15.389 verify=crc32c-intel 00:21:15.389 [job0] 00:21:15.389 filename=/dev/nvme0n1 00:21:15.389 [job1] 00:21:15.389 filename=/dev/nvme0n2 00:21:15.389 [job2] 00:21:15.389 filename=/dev/nvme0n3 00:21:15.389 [job3] 00:21:15.389 filename=/dev/nvme0n4 00:21:15.389 Could not set queue depth (nvme0n1) 00:21:15.389 Could not set queue depth (nvme0n2) 00:21:15.389 Could not set queue depth (nvme0n3) 00:21:15.389 Could not set queue depth (nvme0n4) 00:21:15.657 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:15.657 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:21:15.657 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:15.657 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:15.657 fio-3.35 00:21:15.657 Starting 4 threads 00:21:17.062 00:21:17.062 job0: (groupid=0, jobs=1): err= 0: pid=3219029: Sun Oct 13 17:31:25 2024 00:21:17.062 read: IOPS=19, BW=78.0KiB/s (79.8kB/s)(80.0KiB/1026msec) 00:21:17.062 slat (nsec): min=8433, max=28965, avg=25641.05, stdev=4097.36 00:21:17.062 clat (usec): min=735, max=42007, avg=39436.54, stdev=9121.89 00:21:17.062 lat (usec): min=764, max=42033, avg=39462.18, stdev=9121.12 00:21:17.062 clat percentiles (usec): 00:21:17.062 | 1.00th=[ 734], 5.00th=[ 734], 10.00th=[40633], 20.00th=[40633], 00:21:17.062 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:21:17.062 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:17.062 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:17.062 | 99.99th=[42206] 00:21:17.062 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:21:17.062 slat (nsec): min=9397, max=51453, avg=27837.88, stdev=9406.74 00:21:17.062 clat (usec): min=213, max=616, avg=426.90, stdev=82.75 00:21:17.062 lat (usec): min=246, max=649, avg=454.74, stdev=87.01 00:21:17.062 clat percentiles (usec): 00:21:17.062 | 1.00th=[ 239], 5.00th=[ 273], 10.00th=[ 306], 20.00th=[ 355], 00:21:17.062 | 30.00th=[ 388], 40.00th=[ 412], 50.00th=[ 437], 60.00th=[ 457], 00:21:17.062 | 70.00th=[ 478], 80.00th=[ 498], 90.00th=[ 529], 95.00th=[ 553], 00:21:17.062 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 619], 99.95th=[ 619], 00:21:17.062 | 99.99th=[ 619] 00:21:17.062 bw ( KiB/s): min= 4096, max= 4096, per=46.60%, avg=4096.00, stdev= 0.00, samples=1 00:21:17.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:17.062 lat (usec) : 250=1.88%, 500=76.50%, 
750=18.05% 00:21:17.062 lat (msec) : 50=3.57% 00:21:17.062 cpu : usr=0.68%, sys=1.56%, ctx=532, majf=0, minf=1 00:21:17.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.062 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:17.062 job1: (groupid=0, jobs=1): err= 0: pid=3219044: Sun Oct 13 17:31:25 2024 00:21:17.062 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:21:17.062 slat (nsec): min=7462, max=46315, avg=27256.14, stdev=1793.51 00:21:17.062 clat (usec): min=664, max=1146, avg=963.75, stdev=52.08 00:21:17.062 lat (usec): min=691, max=1173, avg=991.01, stdev=52.27 00:21:17.062 clat percentiles (usec): 00:21:17.062 | 1.00th=[ 799], 5.00th=[ 881], 10.00th=[ 914], 20.00th=[ 938], 00:21:17.062 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:21:17.062 | 70.00th=[ 988], 80.00th=[ 996], 90.00th=[ 1012], 95.00th=[ 1037], 00:21:17.062 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1139], 99.95th=[ 1139], 00:21:17.062 | 99.99th=[ 1139] 00:21:17.062 write: IOPS=746, BW=2985KiB/s (3057kB/s)(2988KiB/1001msec); 0 zone resets 00:21:17.062 slat (nsec): min=9314, max=56181, avg=31953.68, stdev=8730.22 00:21:17.062 clat (usec): min=224, max=960, avg=613.81, stdev=121.78 00:21:17.062 lat (usec): min=235, max=995, avg=645.76, stdev=124.83 00:21:17.062 clat percentiles (usec): 00:21:17.062 | 1.00th=[ 359], 5.00th=[ 412], 10.00th=[ 449], 20.00th=[ 506], 00:21:17.062 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:21:17.062 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 832], 00:21:17.062 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 963], 99.95th=[ 963], 00:21:17.062 | 99.99th=[ 963] 00:21:17.062 bw ( KiB/s): min= 4096, max= 
4096, per=46.60%, avg=4096.00, stdev= 0.00, samples=1 00:21:17.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:17.062 lat (usec) : 250=0.08%, 500=10.48%, 750=41.86%, 1000=40.51% 00:21:17.062 lat (msec) : 2=7.07% 00:21:17.062 cpu : usr=2.70%, sys=5.00%, ctx=1260, majf=0, minf=1 00:21:17.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.062 issued rwts: total=512,747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:17.062 job2: (groupid=0, jobs=1): err= 0: pid=3219064: Sun Oct 13 17:31:25 2024 00:21:17.062 read: IOPS=18, BW=75.5KiB/s (77.3kB/s)(76.0KiB/1007msec) 00:21:17.062 slat (nsec): min=26321, max=27407, avg=26797.58, stdev=304.97 00:21:17.062 clat (usec): min=852, max=42009, avg=39202.47, stdev=9298.68 00:21:17.062 lat (usec): min=879, max=42036, avg=39229.27, stdev=9298.69 00:21:17.062 clat percentiles (usec): 00:21:17.062 | 1.00th=[ 857], 5.00th=[ 857], 10.00th=[40633], 20.00th=[41157], 00:21:17.062 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:17.062 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:21:17.062 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:17.062 | 99.99th=[42206] 00:21:17.062 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:21:17.062 slat (nsec): min=9782, max=52353, avg=30570.92, stdev=8658.40 00:21:17.062 clat (usec): min=152, max=869, avg=471.45, stdev=124.12 00:21:17.062 lat (usec): min=163, max=902, avg=502.02, stdev=127.36 00:21:17.062 clat percentiles (usec): 00:21:17.062 | 1.00th=[ 198], 5.00th=[ 269], 10.00th=[ 306], 20.00th=[ 375], 00:21:17.062 | 30.00th=[ 404], 40.00th=[ 429], 50.00th=[ 461], 60.00th=[ 502], 00:21:17.062 | 
70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 635], 95.00th=[ 685], 00:21:17.062 | 99.00th=[ 758], 99.50th=[ 799], 99.90th=[ 873], 99.95th=[ 873], 00:21:17.062 | 99.99th=[ 873] 00:21:17.062 bw ( KiB/s): min= 4096, max= 4096, per=46.60%, avg=4096.00, stdev= 0.00, samples=1 00:21:17.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:17.062 lat (usec) : 250=2.64%, 500=54.61%, 750=37.85%, 1000=1.51% 00:21:17.062 lat (msec) : 50=3.39% 00:21:17.062 cpu : usr=0.70%, sys=1.69%, ctx=532, majf=0, minf=1 00:21:17.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.062 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:17.062 job3: (groupid=0, jobs=1): err= 0: pid=3219071: Sun Oct 13 17:31:25 2024 00:21:17.062 read: IOPS=17, BW=69.3KiB/s (71.0kB/s)(72.0KiB/1039msec) 00:21:17.062 slat (nsec): min=25280, max=25857, avg=25593.00, stdev=168.26 00:21:17.062 clat (usec): min=1241, max=42025, avg=39663.30, stdev=9590.36 00:21:17.062 lat (usec): min=1267, max=42051, avg=39688.89, stdev=9590.34 00:21:17.062 clat percentiles (usec): 00:21:17.062 | 1.00th=[ 1237], 5.00th=[ 1237], 10.00th=[41157], 20.00th=[41681], 00:21:17.062 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:21:17.062 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:17.062 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:17.062 | 99.99th=[42206] 00:21:17.062 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:21:17.062 slat (nsec): min=9512, max=65433, avg=29484.23, stdev=8324.89 00:21:17.062 clat (usec): min=261, max=934, avg=596.53, stdev=125.71 00:21:17.062 lat (usec): min=273, max=983, avg=626.01, stdev=128.50 
00:21:17.062 clat percentiles (usec): 00:21:17.062 | 1.00th=[ 302], 5.00th=[ 400], 10.00th=[ 429], 20.00th=[ 478], 00:21:17.062 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 635], 00:21:17.062 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 807], 00:21:17.062 | 99.00th=[ 857], 99.50th=[ 898], 99.90th=[ 938], 99.95th=[ 938], 00:21:17.062 | 99.99th=[ 938] 00:21:17.062 bw ( KiB/s): min= 4096, max= 4096, per=46.60%, avg=4096.00, stdev= 0.00, samples=1 00:21:17.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:17.062 lat (usec) : 500=21.32%, 750=64.34%, 1000=10.94% 00:21:17.062 lat (msec) : 2=0.19%, 50=3.21% 00:21:17.062 cpu : usr=0.39%, sys=1.83%, ctx=530, majf=0, minf=1 00:21:17.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.062 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:17.062 00:21:17.062 Run status group 0 (all jobs): 00:21:17.062 READ: bw=2191KiB/s (2243kB/s), 69.3KiB/s-2046KiB/s (71.0kB/s-2095kB/s), io=2276KiB (2331kB), run=1001-1039msec 00:21:17.062 WRITE: bw=8789KiB/s (9000kB/s), 1971KiB/s-2985KiB/s (2018kB/s-3057kB/s), io=9132KiB (9351kB), run=1001-1039msec 00:21:17.062 00:21:17.062 Disk stats (read/write): 00:21:17.062 nvme0n1: ios=65/512, merge=0/0, ticks=631/217, in_queue=848, util=87.37% 00:21:17.062 nvme0n2: ios=544/512, merge=0/0, ticks=1430/242, in_queue=1672, util=96.64% 00:21:17.062 nvme0n3: ios=49/512, merge=0/0, ticks=1270/233, in_queue=1503, util=98.10% 00:21:17.062 nvme0n4: ios=13/512, merge=0/0, ticks=505/283, in_queue=788, util=89.63% 00:21:17.062 17:31:25 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 
00:21:17.062 [global] 00:21:17.062 thread=1 00:21:17.062 invalidate=1 00:21:17.062 rw=write 00:21:17.062 time_based=1 00:21:17.062 runtime=1 00:21:17.062 ioengine=libaio 00:21:17.062 direct=1 00:21:17.062 bs=4096 00:21:17.063 iodepth=128 00:21:17.063 norandommap=0 00:21:17.063 numjobs=1 00:21:17.063 00:21:17.063 verify_dump=1 00:21:17.063 verify_backlog=512 00:21:17.063 verify_state_save=0 00:21:17.063 do_verify=1 00:21:17.063 verify=crc32c-intel 00:21:17.063 [job0] 00:21:17.063 filename=/dev/nvme0n1 00:21:17.063 [job1] 00:21:17.063 filename=/dev/nvme0n2 00:21:17.063 [job2] 00:21:17.063 filename=/dev/nvme0n3 00:21:17.063 [job3] 00:21:17.063 filename=/dev/nvme0n4 00:21:17.063 Could not set queue depth (nvme0n1) 00:21:17.063 Could not set queue depth (nvme0n2) 00:21:17.063 Could not set queue depth (nvme0n3) 00:21:17.063 Could not set queue depth (nvme0n4) 00:21:17.327 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:17.328 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:17.328 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:17.328 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:17.328 fio-3.35 00:21:17.328 Starting 4 threads 00:21:18.735 00:21:18.735 job0: (groupid=0, jobs=1): err= 0: pid=3219493: Sun Oct 13 17:31:26 2024 00:21:18.735 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:21:18.735 slat (nsec): min=945, max=9148.7k, avg=80127.67, stdev=574918.34 00:21:18.735 clat (usec): min=3222, max=22721, avg=10741.96, stdev=2488.00 00:21:18.735 lat (usec): min=4414, max=23351, avg=10822.09, stdev=2516.63 00:21:18.735 clat percentiles (usec): 00:21:18.735 | 1.00th=[ 6718], 5.00th=[ 7767], 10.00th=[ 8094], 20.00th=[ 8979], 00:21:18.735 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10421], 
00:21:18.735 | 70.00th=[11863], 80.00th=[13042], 90.00th=[13960], 95.00th=[15795], 00:21:18.735 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19268], 99.95th=[19268], 00:21:18.735 | 99.99th=[22676] 00:21:18.735 write: IOPS=6428, BW=25.1MiB/s (26.3MB/s)(25.3MiB/1007msec); 0 zone resets 00:21:18.735 slat (nsec): min=1618, max=9518.0k, avg=72651.25, stdev=559872.78 00:21:18.735 clat (usec): min=1226, max=19803, avg=9482.25, stdev=2330.55 00:21:18.735 lat (usec): min=1237, max=19816, avg=9554.90, stdev=2339.53 00:21:18.735 clat percentiles (usec): 00:21:18.735 | 1.00th=[ 4228], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 7111], 00:21:18.735 | 30.00th=[ 8586], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:21:18.735 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[14222], 00:21:18.735 | 99.00th=[15795], 99.50th=[15926], 99.90th=[18220], 99.95th=[19006], 00:21:18.735 | 99.99th=[19792] 00:21:18.735 bw ( KiB/s): min=25000, max=25716, per=21.31%, avg=25358.00, stdev=506.29, samples=2 00:21:18.735 iops : min= 6250, max= 6429, avg=6339.50, stdev=126.57, samples=2 00:21:18.735 lat (msec) : 2=0.06%, 4=0.25%, 10=50.81%, 20=48.85%, 50=0.02% 00:21:18.735 cpu : usr=4.47%, sys=7.65%, ctx=333, majf=0, minf=1 00:21:18.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:18.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:18.735 issued rwts: total=6144,6473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:18.735 job1: (groupid=0, jobs=1): err= 0: pid=3219510: Sun Oct 13 17:31:26 2024 00:21:18.735 read: IOPS=8201, BW=32.0MiB/s (33.6MB/s)(32.2MiB/1006msec) 00:21:18.735 slat (nsec): min=999, max=7554.1k, avg=64093.88, stdev=471038.01 00:21:18.735 clat (usec): min=2485, max=15658, avg=8085.96, stdev=1991.72 00:21:18.735 lat (usec): min=2490, max=17584, 
avg=8150.05, stdev=2016.59 00:21:18.735 clat percentiles (usec): 00:21:18.735 | 1.00th=[ 4047], 5.00th=[ 5538], 10.00th=[ 5866], 20.00th=[ 6783], 00:21:18.735 | 30.00th=[ 7046], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7963], 00:21:18.735 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[11338], 95.00th=[11994], 00:21:18.735 | 99.00th=[13566], 99.50th=[13960], 99.90th=[14615], 99.95th=[14615], 00:21:18.735 | 99.99th=[15664] 00:21:18.735 write: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1006msec); 0 zone resets 00:21:18.735 slat (nsec): min=1671, max=6742.6k, avg=51388.42, stdev=356766.18 00:21:18.735 clat (usec): min=1170, max=15155, avg=7002.85, stdev=1685.63 00:21:18.735 lat (usec): min=1178, max=15158, avg=7054.24, stdev=1677.16 00:21:18.735 clat percentiles (usec): 00:21:18.735 | 1.00th=[ 2073], 5.00th=[ 3523], 10.00th=[ 4621], 20.00th=[ 5735], 00:21:18.735 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7635], 00:21:18.735 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8029], 95.00th=[ 8225], 00:21:18.735 | 99.00th=[11207], 99.50th=[12518], 99.90th=[14222], 99.95th=[15139], 00:21:18.735 | 99.99th=[15139] 00:21:18.735 bw ( KiB/s): min=34136, max=34944, per=29.03%, avg=34540.00, stdev=571.34, samples=2 00:21:18.735 iops : min= 8534, max= 8736, avg=8635.00, stdev=142.84, samples=2 00:21:18.735 lat (msec) : 2=0.39%, 4=3.21%, 10=86.81%, 20=9.58% 00:21:18.735 cpu : usr=4.48%, sys=4.58%, ctx=652, majf=0, minf=2 00:21:18.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:18.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:18.735 issued rwts: total=8251,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:18.735 job2: (groupid=0, jobs=1): err= 0: pid=3219531: Sun Oct 13 17:31:26 2024 00:21:18.735 read: IOPS=7164, BW=28.0MiB/s 
(29.3MB/s)(28.2MiB/1008msec) 00:21:18.735 slat (nsec): min=949, max=8094.1k, avg=69268.54, stdev=524139.34 00:21:18.735 clat (usec): min=2109, max=16645, avg=9120.60, stdev=2205.86 00:21:18.735 lat (usec): min=2115, max=16661, avg=9189.87, stdev=2225.07 00:21:18.735 clat percentiles (usec): 00:21:18.735 | 1.00th=[ 4228], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7504], 00:21:18.735 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:21:18.735 | 70.00th=[ 9765], 80.00th=[10945], 90.00th=[12125], 95.00th=[13435], 00:21:18.735 | 99.00th=[15139], 99.50th=[15664], 99.90th=[16188], 99.95th=[16319], 00:21:18.735 | 99.99th=[16581] 00:21:18.735 write: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec); 0 zone resets 00:21:18.735 slat (nsec): min=1644, max=7371.1k, avg=60068.34, stdev=393981.02 00:21:18.735 clat (usec): min=1175, max=17302, avg=8079.38, stdev=2076.40 00:21:18.735 lat (usec): min=1185, max=17305, avg=8139.45, stdev=2079.72 00:21:18.735 clat percentiles (usec): 00:21:18.735 | 1.00th=[ 2868], 5.00th=[ 4490], 10.00th=[ 5211], 20.00th=[ 6521], 00:21:18.735 | 30.00th=[ 7439], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8717], 00:21:18.735 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9503], 95.00th=[11863], 00:21:18.735 | 99.00th=[14484], 99.50th=[15533], 99.90th=[16581], 99.95th=[17171], 00:21:18.735 | 99.99th=[17433] 00:21:18.735 bw ( KiB/s): min=30248, max=30608, per=25.57%, avg=30428.00, stdev=254.56, samples=2 00:21:18.735 iops : min= 7562, max= 7652, avg=7607.00, stdev=63.64, samples=2 00:21:18.735 lat (msec) : 2=0.13%, 4=2.13%, 10=80.26%, 20=17.48% 00:21:18.735 cpu : usr=5.66%, sys=7.45%, ctx=575, majf=0, minf=1 00:21:18.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:18.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:18.735 issued rwts: total=7222,7680,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:21:18.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:18.735 job3: (groupid=0, jobs=1): err= 0: pid=3219538: Sun Oct 13 17:31:26 2024 00:21:18.735 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:21:18.735 slat (nsec): min=961, max=4560.3k, avg=74143.45, stdev=485763.00 00:21:18.735 clat (usec): min=5487, max=15188, avg=9259.31, stdev=1097.60 00:21:18.735 lat (usec): min=5642, max=15194, avg=9333.45, stdev=1171.51 00:21:18.735 clat percentiles (usec): 00:21:18.735 | 1.00th=[ 6194], 5.00th=[ 7046], 10.00th=[ 8160], 20.00th=[ 8848], 00:21:18.735 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:21:18.735 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[11338], 00:21:18.735 | 99.00th=[12911], 99.50th=[13304], 99.90th=[13960], 99.95th=[14091], 00:21:18.735 | 99.99th=[15139] 00:21:18.735 write: IOPS=7091, BW=27.7MiB/s (29.0MB/s)(27.8MiB/1005msec); 0 zone resets 00:21:18.735 slat (nsec): min=1641, max=4726.0k, avg=66633.76, stdev=311262.68 00:21:18.735 clat (usec): min=4037, max=14710, avg=9193.75, stdev=1133.48 00:21:18.735 lat (usec): min=4447, max=14852, avg=9260.38, stdev=1152.65 00:21:18.736 clat percentiles (usec): 00:21:18.736 | 1.00th=[ 5473], 5.00th=[ 7308], 10.00th=[ 8291], 20.00th=[ 8717], 00:21:18.736 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:21:18.736 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[11207], 00:21:18.736 | 99.00th=[12649], 99.50th=[12911], 99.90th=[14091], 99.95th=[14353], 00:21:18.736 | 99.99th=[14746] 00:21:18.736 bw ( KiB/s): min=27328, max=28672, per=23.53%, avg=28000.00, stdev=950.35, samples=2 00:21:18.736 iops : min= 6832, max= 7168, avg=7000.00, stdev=237.59, samples=2 00:21:18.736 lat (msec) : 10=85.81%, 20=14.19% 00:21:18.736 cpu : usr=4.88%, sys=5.48%, ctx=842, majf=0, minf=1 00:21:18.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:18.736 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:18.736 issued rwts: total=6656,7127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:18.736 00:21:18.736 Run status group 0 (all jobs): 00:21:18.736 READ: bw=110MiB/s (115MB/s), 23.8MiB/s-32.0MiB/s (25.0MB/s-33.6MB/s), io=110MiB (116MB), run=1005-1008msec 00:21:18.736 WRITE: bw=116MiB/s (122MB/s), 25.1MiB/s-33.8MiB/s (26.3MB/s-35.4MB/s), io=117MiB (123MB), run=1005-1008msec 00:21:18.736 00:21:18.736 Disk stats (read/write): 00:21:18.736 nvme0n1: ios=5148/5344, merge=0/0, ticks=53224/47965, in_queue=101189, util=91.18% 00:21:18.736 nvme0n2: ios=6944/7168, merge=0/0, ticks=55174/48654, in_queue=103828, util=88.58% 00:21:18.736 nvme0n3: ios=6144/6262, merge=0/0, ticks=53626/47338, in_queue=100964, util=88.29% 00:21:18.736 nvme0n4: ios=5675/5839, merge=0/0, ticks=26271/24743, in_queue=51014, util=95.83% 00:21:18.736 17:31:26 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:18.736 [global] 00:21:18.736 thread=1 00:21:18.736 invalidate=1 00:21:18.736 rw=randwrite 00:21:18.736 time_based=1 00:21:18.736 runtime=1 00:21:18.736 ioengine=libaio 00:21:18.736 direct=1 00:21:18.736 bs=4096 00:21:18.736 iodepth=128 00:21:18.736 norandommap=0 00:21:18.736 numjobs=1 00:21:18.736 00:21:18.736 verify_dump=1 00:21:18.736 verify_backlog=512 00:21:18.736 verify_state_save=0 00:21:18.736 do_verify=1 00:21:18.736 verify=crc32c-intel 00:21:18.736 [job0] 00:21:18.736 filename=/dev/nvme0n1 00:21:18.736 [job1] 00:21:18.736 filename=/dev/nvme0n2 00:21:18.736 [job2] 00:21:18.736 filename=/dev/nvme0n3 00:21:18.736 [job3] 00:21:18.736 filename=/dev/nvme0n4 00:21:18.736 Could not set queue depth (nvme0n1) 00:21:18.736 Could not set queue depth (nvme0n2) 00:21:18.736 Could not set queue depth (nvme0n3) 
00:21:18.736 Could not set queue depth (nvme0n4) 00:21:19.005 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:19.005 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:19.005 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:19.005 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:19.005 fio-3.35 00:21:19.005 Starting 4 threads 00:21:20.389 00:21:20.389 job0: (groupid=0, jobs=1): err= 0: pid=3219960: Sun Oct 13 17:31:28 2024 00:21:20.389 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:21:20.389 slat (nsec): min=906, max=13172k, avg=99732.94, stdev=740807.90 00:21:20.389 clat (usec): min=3296, max=56444, avg=13278.32, stdev=9388.63 00:21:20.389 lat (usec): min=3301, max=56452, avg=13378.06, stdev=9436.59 00:21:20.389 clat percentiles (usec): 00:21:20.389 | 1.00th=[ 4490], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7373], 00:21:20.389 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10552], 00:21:20.389 | 70.00th=[13435], 80.00th=[18482], 90.00th=[26346], 95.00th=[32375], 00:21:20.389 | 99.00th=[50070], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:21:20.389 | 99.99th=[56361] 00:21:20.389 write: IOPS=5228, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec); 0 zone resets 00:21:20.389 slat (nsec): min=1555, max=10918k, avg=86172.70, stdev=546574.22 00:21:20.389 clat (usec): min=647, max=41137, avg=11183.36, stdev=6310.88 00:21:20.389 lat (usec): min=895, max=41144, avg=11269.53, stdev=6348.88 00:21:20.389 clat percentiles (usec): 00:21:20.389 | 1.00th=[ 3032], 5.00th=[ 4752], 10.00th=[ 5407], 20.00th=[ 6456], 00:21:20.389 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 8586], 60.00th=[11207], 00:21:20.389 | 70.00th=[13304], 80.00th=[15401], 90.00th=[20841], 95.00th=[22938], 
00:21:20.389 | 99.00th=[35914], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:21:20.389 | 99.99th=[41157] 00:21:20.389 bw ( KiB/s): min=11736, max=29232, per=21.08%, avg=20484.00, stdev=12371.54, samples=2 00:21:20.389 iops : min= 2934, max= 7308, avg=5121.00, stdev=3092.89, samples=2 00:21:20.389 lat (usec) : 750=0.01%, 1000=0.03% 00:21:20.389 lat (msec) : 2=0.08%, 4=1.45%, 10=55.15%, 20=29.82%, 50=12.87% 00:21:20.389 lat (msec) : 100=0.60% 00:21:20.389 cpu : usr=3.49%, sys=5.88%, ctx=387, majf=0, minf=2 00:21:20.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:20.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:20.389 issued rwts: total=5120,5249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:20.389 job1: (groupid=0, jobs=1): err= 0: pid=3219977: Sun Oct 13 17:31:28 2024 00:21:20.389 read: IOPS=6553, BW=25.6MiB/s (26.8MB/s)(25.7MiB/1005msec) 00:21:20.389 slat (nsec): min=970, max=8273.5k, avg=77216.91, stdev=518927.21 00:21:20.389 clat (usec): min=1058, max=21393, avg=9861.46, stdev=1982.97 00:21:20.389 lat (usec): min=1626, max=21406, avg=9938.67, stdev=2035.06 00:21:20.389 clat percentiles (usec): 00:21:20.389 | 1.00th=[ 4080], 5.00th=[ 6718], 10.00th=[ 7635], 20.00th=[ 8848], 00:21:20.389 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9896], 00:21:20.389 | 70.00th=[10421], 80.00th=[11207], 90.00th=[12125], 95.00th=[13435], 00:21:20.389 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17433], 99.95th=[18482], 00:21:20.389 | 99.99th=[21365] 00:21:20.389 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:21:20.389 slat (nsec): min=1563, max=13535k, avg=67037.98, stdev=484926.15 00:21:20.389 clat (usec): min=600, max=27631, avg=9372.61, stdev=3428.07 00:21:20.389 lat (usec): min=608, max=27792, 
avg=9439.65, stdev=3452.15 00:21:20.389 clat percentiles (usec): 00:21:20.389 | 1.00th=[ 2900], 5.00th=[ 4621], 10.00th=[ 5407], 20.00th=[ 6718], 00:21:20.389 | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[ 9896], 00:21:20.389 | 70.00th=[10159], 80.00th=[10683], 90.00th=[13042], 95.00th=[14615], 00:21:20.389 | 99.00th=[22676], 99.50th=[26870], 99.90th=[27657], 99.95th=[27657], 00:21:20.389 | 99.99th=[27657] 00:21:20.389 bw ( KiB/s): min=24576, max=28672, per=27.40%, avg=26624.00, stdev=2896.31, samples=2 00:21:20.389 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:21:20.389 lat (usec) : 750=0.02% 00:21:20.389 lat (msec) : 2=0.24%, 4=1.71%, 10=61.19%, 20=35.98%, 50=0.85% 00:21:20.389 cpu : usr=5.78%, sys=6.57%, ctx=426, majf=0, minf=1 00:21:20.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:20.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:20.389 issued rwts: total=6586,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:20.389 job2: (groupid=0, jobs=1): err= 0: pid=3219993: Sun Oct 13 17:31:28 2024 00:21:20.389 read: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1004msec) 00:21:20.389 slat (nsec): min=928, max=10668k, avg=57853.36, stdev=446223.39 00:21:20.389 clat (usec): min=672, max=18959, avg=7927.88, stdev=2626.12 00:21:20.389 lat (usec): min=1374, max=18972, avg=7985.73, stdev=2642.92 00:21:20.389 clat percentiles (usec): 00:21:20.389 | 1.00th=[ 2024], 5.00th=[ 3687], 10.00th=[ 5211], 20.00th=[ 5997], 00:21:20.389 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 8094], 00:21:20.389 | 70.00th=[ 8717], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[12125], 00:21:20.389 | 99.00th=[17433], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:21:20.389 | 99.99th=[19006] 00:21:20.389 write: IOPS=8159, 
BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:21:20.389 slat (nsec): min=1533, max=11624k, avg=54596.60, stdev=429201.82 00:21:20.389 clat (usec): min=448, max=34610, avg=7633.03, stdev=4404.25 00:21:20.389 lat (usec): min=497, max=34619, avg=7687.62, stdev=4431.11 00:21:20.389 clat percentiles (usec): 00:21:20.389 | 1.00th=[ 1205], 5.00th=[ 2638], 10.00th=[ 3556], 20.00th=[ 4686], 00:21:20.389 | 30.00th=[ 5669], 40.00th=[ 6325], 50.00th=[ 7046], 60.00th=[ 7504], 00:21:20.389 | 70.00th=[ 7832], 80.00th=[ 8979], 90.00th=[13042], 95.00th=[17171], 00:21:20.389 | 99.00th=[24249], 99.50th=[24249], 99.90th=[24511], 99.95th=[26346], 00:21:20.389 | 99.99th=[34866] 00:21:20.389 bw ( KiB/s): min=31584, max=33952, per=33.72%, avg=32768.00, stdev=1674.43, samples=2 00:21:20.389 iops : min= 7896, max= 8488, avg=8192.00, stdev=418.61, samples=2 00:21:20.389 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.15% 00:21:20.389 lat (msec) : 2=2.09%, 4=8.43%, 10=72.81%, 20=14.81%, 50=1.66% 00:21:20.389 cpu : usr=5.38%, sys=9.07%, ctx=533, majf=0, minf=1 00:21:20.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:20.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:20.389 issued rwts: total=8184,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:20.389 job3: (groupid=0, jobs=1): err= 0: pid=3219994: Sun Oct 13 17:31:28 2024 00:21:20.389 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:21:20.389 slat (nsec): min=990, max=20300k, avg=135396.44, stdev=949905.81 00:21:20.389 clat (usec): min=2507, max=57721, avg=16850.39, stdev=10042.10 00:21:20.389 lat (usec): min=2527, max=57746, avg=16985.78, stdev=10119.69 00:21:20.389 clat percentiles (usec): 00:21:20.389 | 1.00th=[ 5145], 5.00th=[ 7177], 10.00th=[ 8029], 20.00th=[ 8717], 00:21:20.389 | 
30.00th=[10552], 40.00th=[12649], 50.00th=[14353], 60.00th=[15664], 00:21:20.389 | 70.00th=[17171], 80.00th=[22676], 90.00th=[32375], 95.00th=[40633], 00:21:20.389 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49021], 99.95th=[52691], 00:21:20.389 | 99.99th=[57934] 00:21:20.389 write: IOPS=4300, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1004msec); 0 zone resets 00:21:20.389 slat (nsec): min=1629, max=11207k, avg=96916.36, stdev=613189.17 00:21:20.389 clat (usec): min=3372, max=43846, avg=13464.52, stdev=8163.66 00:21:20.389 lat (usec): min=3376, max=43849, avg=13561.43, stdev=8212.94 00:21:20.389 clat percentiles (usec): 00:21:20.389 | 1.00th=[ 3490], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 7111], 00:21:20.389 | 30.00th=[ 8160], 40.00th=[ 9896], 50.00th=[11076], 60.00th=[12125], 00:21:20.389 | 70.00th=[14615], 80.00th=[17957], 90.00th=[25297], 95.00th=[32900], 00:21:20.389 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:21:20.389 | 99.99th=[43779] 00:21:20.389 bw ( KiB/s): min=12288, max=21240, per=17.25%, avg=16764.00, stdev=6330.02, samples=2 00:21:20.389 iops : min= 3072, max= 5310, avg=4191.00, stdev=1582.50, samples=2 00:21:20.389 lat (msec) : 4=1.13%, 10=33.14%, 20=45.88%, 50=19.82%, 100=0.04% 00:21:20.389 cpu : usr=2.99%, sys=4.89%, ctx=327, majf=0, minf=1 00:21:20.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:20.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:20.390 issued rwts: total=4096,4318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:20.390 00:21:20.390 Run status group 0 (all jobs): 00:21:20.390 READ: bw=93.2MiB/s (97.8MB/s), 15.9MiB/s-31.8MiB/s (16.7MB/s-33.4MB/s), io=93.7MiB (98.2MB), run=1004-1005msec 00:21:20.390 WRITE: bw=94.9MiB/s (99.5MB/s), 16.8MiB/s-31.9MiB/s (17.6MB/s-33.4MB/s), io=95.4MiB (100MB), 
run=1004-1005msec 00:21:20.390 00:21:20.390 Disk stats (read/write): 00:21:20.390 nvme0n1: ios=4631/4608, merge=0/0, ticks=35641/38801, in_queue=74442, util=86.67% 00:21:20.390 nvme0n2: ios=5314/5632, merge=0/0, ticks=27757/28486, in_queue=56243, util=90.72% 00:21:20.390 nvme0n3: ios=6712/7055, merge=0/0, ticks=45050/42422, in_queue=87472, util=92.29% 00:21:20.390 nvme0n4: ios=3132/3549, merge=0/0, ticks=25155/21790, in_queue=46945, util=97.01% 00:21:20.390 17:31:28 -- target/fio.sh@55 -- # sync 00:21:20.390 17:31:28 -- target/fio.sh@59 -- # fio_pid=3220155 00:21:20.390 17:31:28 -- target/fio.sh@61 -- # sleep 3 00:21:20.390 17:31:28 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:20.390 [global] 00:21:20.390 thread=1 00:21:20.390 invalidate=1 00:21:20.390 rw=read 00:21:20.390 time_based=1 00:21:20.390 runtime=10 00:21:20.390 ioengine=libaio 00:21:20.390 direct=1 00:21:20.390 bs=4096 00:21:20.390 iodepth=1 00:21:20.390 norandommap=1 00:21:20.390 numjobs=1 00:21:20.390 00:21:20.390 [job0] 00:21:20.390 filename=/dev/nvme0n1 00:21:20.390 [job1] 00:21:20.390 filename=/dev/nvme0n2 00:21:20.390 [job2] 00:21:20.390 filename=/dev/nvme0n3 00:21:20.390 [job3] 00:21:20.390 filename=/dev/nvme0n4 00:21:20.390 Could not set queue depth (nvme0n1) 00:21:20.390 Could not set queue depth (nvme0n2) 00:21:20.390 Could not set queue depth (nvme0n3) 00:21:20.390 Could not set queue depth (nvme0n4) 00:21:20.651 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:20.651 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:20.651 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:20.651 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:20.651 fio-3.35 00:21:20.651 Starting 4 
threads 00:21:23.199 17:31:31 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:23.483 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=13979648, buflen=4096 00:21:23.483 fio: pid=3220503, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:23.483 17:31:31 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:23.483 17:31:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:23.483 17:31:31 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:23.483 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=364544, buflen=4096 00:21:23.483 fio: pid=3220497, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:23.771 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4263936, buflen=4096 00:21:23.771 fio: pid=3220469, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:23.771 17:31:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:23.771 17:31:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:23.771 17:31:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:23.771 17:31:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:23.771 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10358784, buflen=4096 00:21:23.771 fio: pid=3220479, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:24.031 00:21:24.031 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u 
error, error=Operation not supported): pid=3220469: Sun Oct 13 17:31:32 2024 00:21:24.031 read: IOPS=356, BW=1423KiB/s (1457kB/s)(4164KiB/2926msec) 00:21:24.031 slat (usec): min=6, max=31983, avg=97.10, stdev=1253.30 00:21:24.031 clat (usec): min=476, max=42101, avg=2686.08, stdev=8250.98 00:21:24.032 lat (usec): min=483, max=42125, avg=2769.81, stdev=8323.12 00:21:24.032 clat percentiles (usec): 00:21:24.032 | 1.00th=[ 570], 5.00th=[ 668], 10.00th=[ 717], 20.00th=[ 807], 00:21:24.032 | 30.00th=[ 873], 40.00th=[ 922], 50.00th=[ 955], 60.00th=[ 996], 00:21:24.032 | 70.00th=[ 1020], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[ 1237], 00:21:24.032 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:24.032 | 99.99th=[42206] 00:21:24.032 bw ( KiB/s): min= 240, max= 4128, per=14.03%, avg=1276.80, stdev=1615.93, samples=5 00:21:24.032 iops : min= 60, max= 1032, avg=319.20, stdev=403.98, samples=5 00:21:24.032 lat (usec) : 500=0.19%, 750=13.82%, 1000=49.23% 00:21:24.032 lat (msec) : 2=32.25%, 10=0.10%, 50=4.32% 00:21:24.032 cpu : usr=0.27%, sys=1.26%, ctx=1047, majf=0, minf=2 00:21:24.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:24.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.032 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.032 issued rwts: total=1042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:24.032 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220479: Sun Oct 13 17:31:32 2024 00:21:24.032 read: IOPS=813, BW=3252KiB/s (3330kB/s)(9.88MiB/3111msec) 00:21:24.032 slat (usec): min=24, max=12428, avg=31.81, stdev=256.79 00:21:24.032 clat (usec): min=344, max=41969, avg=1182.97, stdev=2781.39 00:21:24.032 lat (usec): min=369, max=45113, avg=1214.77, stdev=2813.49 00:21:24.032 clat percentiles (usec): 00:21:24.032 
| 1.00th=[ 742], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 938], 00:21:24.032 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:21:24.032 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:21:24.032 | 99.00th=[ 1188], 99.50th=[ 1434], 99.90th=[41681], 99.95th=[41681], 00:21:24.032 | 99.99th=[42206] 00:21:24.032 bw ( KiB/s): min= 672, max= 3968, per=36.86%, avg=3352.00, stdev=1314.56, samples=6 00:21:24.032 iops : min= 168, max= 992, avg=838.00, stdev=328.64, samples=6 00:21:24.032 lat (usec) : 500=0.08%, 750=1.07%, 1000=47.59% 00:21:24.032 lat (msec) : 2=50.75%, 50=0.47% 00:21:24.032 cpu : usr=0.84%, sys=2.48%, ctx=2533, majf=0, minf=1 00:21:24.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:24.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.032 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.032 issued rwts: total=2530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:24.032 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220497: Sun Oct 13 17:31:32 2024 00:21:24.032 read: IOPS=32, BW=129KiB/s (132kB/s)(356KiB/2767msec) 00:21:24.032 slat (nsec): min=7202, max=40778, avg=26181.38, stdev=4102.01 00:21:24.032 clat (usec): min=621, max=43927, avg=30810.32, stdev=17972.80 00:21:24.032 lat (usec): min=646, max=43959, avg=30836.50, stdev=17973.44 00:21:24.032 clat percentiles (usec): 00:21:24.032 | 1.00th=[ 619], 5.00th=[ 791], 10.00th=[ 832], 20.00th=[ 1029], 00:21:24.032 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:21:24.032 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:24.032 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:21:24.032 | 99.99th=[43779] 00:21:24.032 bw ( KiB/s): min= 96, max= 200, per=1.45%, avg=132.80, 
stdev=48.20, samples=5 00:21:24.032 iops : min= 24, max= 50, avg=33.20, stdev=12.05, samples=5 00:21:24.032 lat (usec) : 750=4.44%, 1000=12.22% 00:21:24.032 lat (msec) : 2=7.78%, 10=2.22%, 50=72.22% 00:21:24.032 cpu : usr=0.00%, sys=0.18%, ctx=90, majf=0, minf=2 00:21:24.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:24.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.032 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.032 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:24.032 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220503: Sun Oct 13 17:31:32 2024 00:21:24.032 read: IOPS=1320, BW=5279KiB/s (5406kB/s)(13.3MiB/2586msec) 00:21:24.032 slat (nsec): min=6770, max=82917, avg=25948.96, stdev=6102.60 00:21:24.032 clat (usec): min=134, max=42293, avg=717.66, stdev=2438.49 00:21:24.032 lat (usec): min=151, max=42321, avg=743.61, stdev=2438.77 00:21:24.032 clat percentiles (usec): 00:21:24.032 | 1.00th=[ 225], 5.00th=[ 326], 10.00th=[ 367], 20.00th=[ 453], 00:21:24.032 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 570], 60.00th=[ 594], 00:21:24.032 | 70.00th=[ 635], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 807], 00:21:24.032 | 99.00th=[ 857], 99.50th=[ 996], 99.90th=[41681], 99.95th=[42206], 00:21:24.032 | 99.99th=[42206] 00:21:24.032 bw ( KiB/s): min= 1520, max= 6800, per=58.49%, avg=5318.40, stdev=2230.78, samples=5 00:21:24.032 iops : min= 380, max= 1700, avg=1329.60, stdev=557.69, samples=5 00:21:24.032 lat (usec) : 250=2.37%, 500=25.07%, 750=56.06%, 1000=15.99% 00:21:24.032 lat (msec) : 2=0.12%, 50=0.35% 00:21:24.032 cpu : usr=1.55%, sys=3.83%, ctx=3418, majf=0, minf=1 00:21:24.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:24.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.032 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.032 issued rwts: total=3414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:24.032 00:21:24.032 Run status group 0 (all jobs): 00:21:24.032 READ: bw=9093KiB/s (9311kB/s), 129KiB/s-5279KiB/s (132kB/s-5406kB/s), io=27.6MiB (29.0MB), run=2586-3111msec 00:21:24.032 00:21:24.032 Disk stats (read/write): 00:21:24.032 nvme0n1: ios=1003/0, merge=0/0, ticks=2727/0, in_queue=2727, util=93.29% 00:21:24.032 nvme0n2: ios=2528/0, merge=0/0, ticks=2910/0, in_queue=2910, util=95.20% 00:21:24.032 nvme0n3: ios=85/0, merge=0/0, ticks=2577/0, in_queue=2577, util=96.03% 00:21:24.032 nvme0n4: ios=3099/0, merge=0/0, ticks=3109/0, in_queue=3109, util=98.56% 00:21:24.032 17:31:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:24.032 17:31:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:24.292 17:31:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:24.292 17:31:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:24.292 17:31:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:24.292 17:31:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:24.552 17:31:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:24.552 17:31:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:24.812 17:31:33 -- target/fio.sh@69 -- # fio_status=0 00:21:24.812 17:31:33 -- target/fio.sh@70 -- # wait 3220155 00:21:24.812 
17:31:33 -- target/fio.sh@70 -- # fio_status=4 00:21:24.812 17:31:33 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:24.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:24.812 17:31:33 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:24.812 17:31:33 -- common/autotest_common.sh@1198 -- # local i=0 00:21:24.812 17:31:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:24.812 17:31:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:24.812 17:31:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:24.812 17:31:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:24.813 17:31:33 -- common/autotest_common.sh@1210 -- # return 0 00:21:24.813 17:31:33 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:24.813 17:31:33 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:24.813 nvmf hotplug test: fio failed as expected 00:21:24.813 17:31:33 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.072 17:31:33 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:25.072 17:31:33 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:25.072 17:31:33 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:25.073 17:31:33 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:25.073 17:31:33 -- target/fio.sh@91 -- # nvmftestfini 00:21:25.073 17:31:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:25.073 17:31:33 -- nvmf/common.sh@116 -- # sync 00:21:25.073 17:31:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:25.073 17:31:33 -- nvmf/common.sh@119 -- # set +e 00:21:25.073 17:31:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:25.073 17:31:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:25.073 rmmod nvme_tcp 00:21:25.073 rmmod nvme_fabrics 
00:21:25.073 rmmod nvme_keyring 00:21:25.073 17:31:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:25.073 17:31:33 -- nvmf/common.sh@123 -- # set -e 00:21:25.073 17:31:33 -- nvmf/common.sh@124 -- # return 0 00:21:25.073 17:31:33 -- nvmf/common.sh@477 -- # '[' -n 3216648 ']' 00:21:25.073 17:31:33 -- nvmf/common.sh@478 -- # killprocess 3216648 00:21:25.073 17:31:33 -- common/autotest_common.sh@926 -- # '[' -z 3216648 ']' 00:21:25.073 17:31:33 -- common/autotest_common.sh@930 -- # kill -0 3216648 00:21:25.073 17:31:33 -- common/autotest_common.sh@931 -- # uname 00:21:25.073 17:31:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:25.073 17:31:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3216648 00:21:25.073 17:31:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:25.073 17:31:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:25.073 17:31:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3216648' 00:21:25.073 killing process with pid 3216648 00:21:25.073 17:31:33 -- common/autotest_common.sh@945 -- # kill 3216648 00:21:25.073 17:31:33 -- common/autotest_common.sh@950 -- # wait 3216648 00:21:25.333 17:31:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:25.333 17:31:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:25.333 17:31:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:25.333 17:31:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.333 17:31:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:25.333 17:31:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.333 17:31:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.333 17:31:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.243 17:31:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:27.243 00:21:27.243 real 0m28.590s 00:21:27.243 user 2m35.120s 00:21:27.243 sys 
0m9.263s 00:21:27.243 17:31:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.243 17:31:35 -- common/autotest_common.sh@10 -- # set +x 00:21:27.243 ************************************ 00:21:27.243 END TEST nvmf_fio_target 00:21:27.243 ************************************ 00:21:27.504 17:31:35 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:27.504 17:31:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:27.504 17:31:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:27.504 17:31:35 -- common/autotest_common.sh@10 -- # set +x 00:21:27.504 ************************************ 00:21:27.504 START TEST nvmf_bdevio 00:21:27.504 ************************************ 00:21:27.504 17:31:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:27.504 * Looking for test storage... 00:21:27.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.504 17:31:35 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.504 17:31:35 -- nvmf/common.sh@7 -- # uname -s 00:21:27.504 17:31:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.504 17:31:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.504 17:31:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.504 17:31:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.504 17:31:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.504 17:31:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.504 17:31:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.504 17:31:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.504 17:31:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.504 17:31:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:21:27.504 17:31:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:27.504 17:31:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:27.504 17:31:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.504 17:31:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.504 17:31:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.504 17:31:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.504 17:31:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.504 17:31:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.504 17:31:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.504 17:31:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.504 17:31:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:27.505 17:31:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.505 17:31:35 -- paths/export.sh@5 -- # export PATH 00:21:27.505 17:31:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.505 17:31:35 -- nvmf/common.sh@46 -- # : 0 00:21:27.505 17:31:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:27.505 17:31:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:27.505 17:31:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:27.505 17:31:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.505 17:31:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.505 17:31:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:27.505 17:31:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:27.505 17:31:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:27.505 17:31:35 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:27.505 17:31:35 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:21:27.505 17:31:35 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:27.505 17:31:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:27.505 17:31:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.505 17:31:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:27.505 17:31:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:27.505 17:31:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:27.505 17:31:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.505 17:31:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.505 17:31:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.505 17:31:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:27.505 17:31:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:27.505 17:31:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:27.505 17:31:35 -- common/autotest_common.sh@10 -- # set +x 00:21:35.642 17:31:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:35.642 17:31:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:35.642 17:31:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:35.642 17:31:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:35.642 17:31:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:35.642 17:31:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:35.642 17:31:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:35.642 17:31:43 -- nvmf/common.sh@294 -- # net_devs=() 00:21:35.642 17:31:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:35.642 17:31:43 -- nvmf/common.sh@295 -- # e810=() 00:21:35.642 17:31:43 -- nvmf/common.sh@295 -- # local -ga e810 00:21:35.642 17:31:43 -- nvmf/common.sh@296 -- # x722=() 00:21:35.642 17:31:43 -- nvmf/common.sh@296 -- # local -ga x722 00:21:35.642 17:31:43 -- nvmf/common.sh@297 -- # mlx=() 00:21:35.642 17:31:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:35.642 17:31:43 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.642 17:31:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:35.642 17:31:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:35.642 17:31:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:35.642 17:31:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:35.642 17:31:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:35.642 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:35.642 17:31:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:35.642 17:31:43 -- 
nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:35.642 17:31:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:35.642 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:35.642 17:31:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:35.642 17:31:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:35.642 17:31:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.642 17:31:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:35.642 17:31:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.642 17:31:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:35.642 Found net devices under 0000:31:00.0: cvl_0_0 00:21:35.642 17:31:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.642 17:31:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:35.642 17:31:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.642 17:31:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:35.642 17:31:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.642 17:31:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:35.642 Found net devices under 0000:31:00.1: cvl_0_1 00:21:35.642 17:31:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.642 17:31:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:35.642 17:31:43 -- nvmf/common.sh@402 
-- # is_hw=yes 00:21:35.642 17:31:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:35.642 17:31:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.642 17:31:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.642 17:31:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.642 17:31:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:35.642 17:31:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.642 17:31:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.642 17:31:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:35.642 17:31:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.642 17:31:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.642 17:31:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:35.642 17:31:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:35.642 17:31:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.642 17:31:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.642 17:31:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.642 17:31:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.642 17:31:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:35.642 17:31:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.642 17:31:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.642 17:31:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.642 17:31:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:35.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:35.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:21:35.642 00:21:35.642 --- 10.0.0.2 ping statistics --- 00:21:35.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.642 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:21:35.642 17:31:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:21:35.642 00:21:35.642 --- 10.0.0.1 ping statistics --- 00:21:35.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.642 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:21:35.642 17:31:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.642 17:31:43 -- nvmf/common.sh@410 -- # return 0 00:21:35.642 17:31:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:35.642 17:31:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.642 17:31:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:35.642 17:31:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.642 17:31:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:35.642 17:31:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:35.642 17:31:43 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:35.642 17:31:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:35.642 17:31:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:35.642 17:31:43 -- common/autotest_common.sh@10 -- # set +x 00:21:35.642 17:31:43 -- nvmf/common.sh@469 -- # nvmfpid=3225695 00:21:35.642 17:31:43 -- nvmf/common.sh@470 -- # waitforlisten 3225695 00:21:35.642 17:31:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:35.642 17:31:43 -- common/autotest_common.sh@819 
-- # '[' -z 3225695 ']' 00:21:35.642 17:31:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.642 17:31:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:35.642 17:31:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.642 17:31:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:35.642 17:31:43 -- common/autotest_common.sh@10 -- # set +x 00:21:35.642 [2024-10-13 17:31:43.516839] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:35.642 [2024-10-13 17:31:43.516901] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.642 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.642 [2024-10-13 17:31:43.608480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.642 [2024-10-13 17:31:43.655980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:35.642 [2024-10-13 17:31:43.656140] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.642 [2024-10-13 17:31:43.656149] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.642 [2024-10-13 17:31:43.656158] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
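The `nvmf_tcp_init` trace above builds a loopback TCP test topology by moving one port of the NIC into a network namespace. The sequence can be sketched as a dry-run script; interface and namespace names (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`) and addresses are taken from this log, and `run()` only echoes each command so the sketch is safe to execute without root. Swap `run()` for direct execution to reproduce the setup on real hardware.

```shell
# Dry-run sketch of the namespace topology nvmf_tcp_init builds above.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # target-side network namespace
TGT=cvl_0_0          # interface moved into the namespace (10.0.0.2)
INI=cvl_0_1          # initiator interface left in the root ns (10.0.0.1)

run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target, matching the log's ping
```

Because the target process is later launched with `ip netns exec cvl_0_0_ns_spdk`, it listens on 10.0.0.2 inside the namespace while the initiator-side tools connect from 10.0.0.1 in the root namespace, giving a real TCP path over one physical adapter.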
00:21:35.642 [2024-10-13 17:31:43.656320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:35.642 [2024-10-13 17:31:43.656481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:35.642 [2024-10-13 17:31:43.656648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.642 [2024-10-13 17:31:43.656648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:35.903 17:31:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:35.903 17:31:44 -- common/autotest_common.sh@852 -- # return 0 00:21:35.903 17:31:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:35.903 17:31:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:35.903 17:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:35.903 17:31:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.903 17:31:44 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.903 17:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.903 17:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:35.903 [2024-10-13 17:31:44.378794] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.903 17:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.903 17:31:44 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:35.903 17:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.903 17:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:35.903 Malloc0 00:21:35.903 17:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.903 17:31:44 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:35.903 17:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.903 17:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:35.903 17:31:44 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:21:35.903 17:31:44 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:35.903 17:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.903 17:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:36.164 17:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:36.164 17:31:44 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:36.164 17:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:36.164 17:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:36.164 [2024-10-13 17:31:44.443638] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.164 17:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:36.164 17:31:44 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:36.164 17:31:44 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:36.164 17:31:44 -- nvmf/common.sh@520 -- # config=() 00:21:36.164 17:31:44 -- nvmf/common.sh@520 -- # local subsystem config 00:21:36.164 17:31:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:36.164 17:31:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:36.164 { 00:21:36.164 "params": { 00:21:36.164 "name": "Nvme$subsystem", 00:21:36.164 "trtype": "$TEST_TRANSPORT", 00:21:36.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.164 "adrfam": "ipv4", 00:21:36.164 "trsvcid": "$NVMF_PORT", 00:21:36.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.164 "hdgst": ${hdgst:-false}, 00:21:36.164 "ddgst": ${ddgst:-false} 00:21:36.164 }, 00:21:36.164 "method": "bdev_nvme_attach_controller" 00:21:36.164 } 00:21:36.164 EOF 00:21:36.164 )") 00:21:36.164 17:31:44 -- nvmf/common.sh@542 -- # cat 00:21:36.164 17:31:44 -- nvmf/common.sh@544 -- # jq . 
00:21:36.164 17:31:44 -- nvmf/common.sh@545 -- # IFS=, 00:21:36.164 17:31:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:36.164 "params": { 00:21:36.164 "name": "Nvme1", 00:21:36.164 "trtype": "tcp", 00:21:36.164 "traddr": "10.0.0.2", 00:21:36.164 "adrfam": "ipv4", 00:21:36.164 "trsvcid": "4420", 00:21:36.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.164 "hdgst": false, 00:21:36.164 "ddgst": false 00:21:36.164 }, 00:21:36.164 "method": "bdev_nvme_attach_controller" 00:21:36.164 }' 00:21:36.164 [2024-10-13 17:31:44.497712] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:36.164 [2024-10-13 17:31:44.497799] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225815 ] 00:21:36.164 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.164 [2024-10-13 17:31:44.574052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:36.164 [2024-10-13 17:31:44.611919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.164 [2024-10-13 17:31:44.612041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.164 [2024-10-13 17:31:44.612043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.424 [2024-10-13 17:31:44.741628] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
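The `gen_nvmf_target_json` trace above shows how the bdevio invocation gets its `--json /dev/fd/62` config: each subsystem's JSON fragment is produced by a heredoc with shell variables expanded, the fragments are joined with `IFS=','`, and the result is printed for `jq`. A minimal standalone reconstruction of that pattern, using the default values visible in this log:

```shell
# Sketch of the heredoc-template approach gen_nvmf_target_json traces above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the per-subsystem fragments with commas, as nvmf/common.sh does.
IFS=','
json="${config[*]}"
unset IFS
printf '%s\n' "$json"
```

Note that the unquoted heredoc delimiter (`EOF`, not `'EOF'`) is what lets `$TEST_TRANSPORT` and friends expand inside the template, producing the resolved `Nvme1`/`10.0.0.2` config seen in the `printf '%s\n'` output above.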
00:21:36.425 [2024-10-13 17:31:44.741660] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:36.425 I/O targets: 00:21:36.425 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:36.425 00:21:36.425 00:21:36.425 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.425 http://cunit.sourceforge.net/ 00:21:36.425 00:21:36.425 00:21:36.425 Suite: bdevio tests on: Nvme1n1 00:21:36.425 Test: blockdev write read block ...passed 00:21:36.425 Test: blockdev write zeroes read block ...passed 00:21:36.425 Test: blockdev write zeroes read no split ...passed 00:21:36.425 Test: blockdev write zeroes read split ...passed 00:21:36.425 Test: blockdev write zeroes read split partial ...passed 00:21:36.425 Test: blockdev reset ...[2024-10-13 17:31:44.902673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.425 [2024-10-13 17:31:44.902722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x749f30 (9): Bad file descriptor 00:21:36.425 [2024-10-13 17:31:44.923033] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:36.425 passed 00:21:36.684 Test: blockdev write read 8 blocks ...passed 00:21:36.684 Test: blockdev write read size > 128k ...passed 00:21:36.684 Test: blockdev write read invalid size ...passed 00:21:36.685 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:36.685 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:36.685 Test: blockdev write read max offset ...passed 00:21:36.685 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:36.685 Test: blockdev writev readv 8 blocks ...passed 00:21:36.685 Test: blockdev writev readv 30 x 1block ...passed 00:21:36.945 Test: blockdev writev readv block ...passed 00:21:36.945 Test: blockdev writev readv size > 128k ...passed 00:21:36.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:36.945 Test: blockdev comparev and writev ...[2024-10-13 17:31:45.223987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.945 [2024-10-13 17:31:45.224018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:36.945 [2024-10-13 17:31:45.224029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.945 [2024-10-13 17:31:45.224036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:36.945 [2024-10-13 17:31:45.224409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.945 [2024-10-13 17:31:45.224419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:36.945 [2024-10-13 17:31:45.224428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.945 [2024-10-13 17:31:45.224434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:36.945 [2024-10-13 17:31:45.224799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.945 [2024-10-13 17:31:45.224809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:36.945 [2024-10-13 17:31:45.224819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.945 [2024-10-13 17:31:45.224826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:36.945 [2024-10-13 17:31:45.225203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.945 [2024-10-13 17:31:45.225212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:36.945 [2024-10-13 17:31:45.225221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.945 [2024-10-13 17:31:45.225226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:36.945 passed 00:21:36.945 Test: blockdev nvme passthru rw ...passed 00:21:36.945 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:31:45.309508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.945 [2024-10-13 17:31:45.309520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:36.945 [2024-10-13 17:31:45.309712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.945 [2024-10-13 17:31:45.309720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:36.945 [2024-10-13 17:31:45.309957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.945 [2024-10-13 17:31:45.309966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:36.945 [2024-10-13 17:31:45.310178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.945 [2024-10-13 17:31:45.310187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:36.945 passed 00:21:36.945 Test: blockdev nvme admin passthru ...passed 00:21:36.945 Test: blockdev copy ...passed 00:21:36.945 00:21:36.945 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.945 suites 1 1 n/a 0 0 00:21:36.945 tests 23 23 23 0 0 00:21:36.945 asserts 152 152 152 0 n/a 00:21:36.945 00:21:36.945 Elapsed time = 1.255 seconds 00:21:36.945 17:31:45 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.945 17:31:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:36.945 17:31:45 -- common/autotest_common.sh@10 -- # set +x 00:21:37.206 17:31:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:37.206 17:31:45 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:37.206 17:31:45 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:37.206 17:31:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:37.206 17:31:45 -- nvmf/common.sh@116 -- # sync 00:21:37.206 
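The teardown traced below uses `kill -0` to probe whether the nvmf target PID is still alive and `ps --no-headers -o comm=` to recover its name for the "killing process" message. A simplified stand-in for that pattern (not a copy of `common/autotest_common.sh`'s `killprocess`, which also handles sudo and retries):

```shell
# Hedged sketch of the kill/liveness pattern used by the cleanup below.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0          # signal 0: existence check only
  echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                 # reap it if it is our child
}

sleep 30 &
bgpid=$!
killprocess "$bgpid"
```

`kill -0` delivers no signal at all; it merely asks the kernel whether the PID exists and is signalable, which is why the log's `kill -0 3225695` check can precede the real `kill` without side effects.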
17:31:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:37.206 17:31:45 -- nvmf/common.sh@119 -- # set +e 00:21:37.206 17:31:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:37.206 17:31:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:37.206 rmmod nvme_tcp 00:21:37.206 rmmod nvme_fabrics 00:21:37.206 rmmod nvme_keyring 00:21:37.206 17:31:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:37.206 17:31:45 -- nvmf/common.sh@123 -- # set -e 00:21:37.206 17:31:45 -- nvmf/common.sh@124 -- # return 0 00:21:37.206 17:31:45 -- nvmf/common.sh@477 -- # '[' -n 3225695 ']' 00:21:37.206 17:31:45 -- nvmf/common.sh@478 -- # killprocess 3225695 00:21:37.206 17:31:45 -- common/autotest_common.sh@926 -- # '[' -z 3225695 ']' 00:21:37.206 17:31:45 -- common/autotest_common.sh@930 -- # kill -0 3225695 00:21:37.206 17:31:45 -- common/autotest_common.sh@931 -- # uname 00:21:37.206 17:31:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.206 17:31:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3225695 00:21:37.206 17:31:45 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:37.206 17:31:45 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:37.206 17:31:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3225695' 00:21:37.206 killing process with pid 3225695 00:21:37.206 17:31:45 -- common/autotest_common.sh@945 -- # kill 3225695 00:21:37.206 17:31:45 -- common/autotest_common.sh@950 -- # wait 3225695 00:21:37.466 17:31:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:37.466 17:31:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:37.466 17:31:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:37.466 17:31:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.466 17:31:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:37.466 17:31:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.466 17:31:45 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.466 17:31:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.422 17:31:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:39.422 00:21:39.422 real 0m12.031s 00:21:39.422 user 0m12.404s 00:21:39.422 sys 0m6.196s 00:21:39.422 17:31:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.422 17:31:47 -- common/autotest_common.sh@10 -- # set +x 00:21:39.422 ************************************ 00:21:39.422 END TEST nvmf_bdevio 00:21:39.422 ************************************ 00:21:39.422 17:31:47 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:21:39.422 17:31:47 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:39.422 17:31:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:21:39.422 17:31:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:39.422 17:31:47 -- common/autotest_common.sh@10 -- # set +x 00:21:39.422 ************************************ 00:21:39.422 START TEST nvmf_bdevio_no_huge 00:21:39.422 ************************************ 00:21:39.422 17:31:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:39.682 * Looking for test storage... 
00:21:39.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:39.682 17:31:47 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.682 17:31:47 -- nvmf/common.sh@7 -- # uname -s 00:21:39.682 17:31:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.683 17:31:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.683 17:31:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.683 17:31:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.683 17:31:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.683 17:31:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.683 17:31:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.683 17:31:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.683 17:31:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.683 17:31:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.683 17:31:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:39.683 17:31:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:39.683 17:31:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.683 17:31:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.683 17:31:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.683 17:31:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.683 17:31:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.683 17:31:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.683 17:31:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.683 17:31:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.683 17:31:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.683 17:31:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.683 17:31:47 -- paths/export.sh@5 -- # export PATH 00:21:39.683 17:31:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.683 17:31:47 -- nvmf/common.sh@46 -- # : 0 00:21:39.683 17:31:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:39.683 17:31:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:39.683 17:31:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:39.683 17:31:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.683 17:31:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.683 17:31:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:39.683 17:31:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:39.683 17:31:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:39.683 17:31:47 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:39.683 17:31:47 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:39.683 17:31:47 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:39.683 17:31:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:39.683 17:31:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.683 17:31:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:39.683 17:31:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:39.683 17:31:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:39.683 17:31:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.683 17:31:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.683 17:31:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.683 17:31:48 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:39.683 17:31:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:39.683 17:31:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:39.683 17:31:48 -- common/autotest_common.sh@10 -- # set +x 00:21:46.270 17:31:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:46.270 17:31:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:46.270 17:31:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:46.270 17:31:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:46.270 17:31:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:46.270 17:31:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:46.270 17:31:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:46.270 17:31:54 -- nvmf/common.sh@294 -- # net_devs=() 00:21:46.270 17:31:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:46.270 17:31:54 -- nvmf/common.sh@295 -- # e810=() 00:21:46.270 17:31:54 -- nvmf/common.sh@295 -- # local -ga e810 00:21:46.270 17:31:54 -- nvmf/common.sh@296 -- # x722=() 00:21:46.270 17:31:54 -- nvmf/common.sh@296 -- # local -ga x722 00:21:46.270 17:31:54 -- nvmf/common.sh@297 -- # mlx=() 00:21:46.270 17:31:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:46.270 17:31:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.270 17:31:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.270 17:31:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.270 17:31:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.270 17:31:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.270 17:31:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.270 17:31:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.270 17:31:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.270 17:31:54 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.270 17:31:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.270 17:31:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.270 17:31:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:46.270 17:31:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:46.270 17:31:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:46.270 17:31:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:46.270 17:31:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:46.270 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:46.270 17:31:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:46.270 17:31:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:46.270 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:46.270 17:31:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:46.270 17:31:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:46.270 17:31:54 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:46.270 17:31:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.270 17:31:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:46.270 17:31:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.270 17:31:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:46.270 Found net devices under 0000:31:00.0: cvl_0_0 00:21:46.270 17:31:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.270 17:31:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:46.270 17:31:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.270 17:31:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:46.270 17:31:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.270 17:31:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:46.270 Found net devices under 0000:31:00.1: cvl_0_1 00:21:46.270 17:31:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.270 17:31:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:46.270 17:31:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:46.270 17:31:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:46.270 17:31:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:46.270 17:31:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.270 17:31:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.270 17:31:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.270 17:31:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:46.270 17:31:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.270 17:31:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.270 17:31:54 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:46.270 17:31:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.270 17:31:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.270 17:31:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:46.270 17:31:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:46.270 17:31:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.270 17:31:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.531 17:31:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.531 17:31:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.531 17:31:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:46.531 17:31:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.531 17:31:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.531 17:31:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.531 17:31:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:46.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:21:46.531 00:21:46.531 --- 10.0.0.2 ping statistics --- 00:21:46.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.531 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:21:46.531 17:31:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:21:46.531 00:21:46.531 --- 10.0.0.1 ping statistics --- 00:21:46.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.531 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:21:46.531 17:31:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.531 17:31:54 -- nvmf/common.sh@410 -- # return 0 00:21:46.531 17:31:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:46.531 17:31:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.531 17:31:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:46.531 17:31:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:46.531 17:31:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.531 17:31:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:46.531 17:31:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:46.531 17:31:55 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:46.531 17:31:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:46.531 17:31:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:46.531 17:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:46.531 17:31:55 -- nvmf/common.sh@469 -- # nvmfpid=3230219 00:21:46.531 17:31:55 -- nvmf/common.sh@470 -- # waitforlisten 3230219 00:21:46.531 17:31:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:46.531 17:31:55 -- common/autotest_common.sh@819 -- # '[' -z 3230219 ']' 00:21:46.531 17:31:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.531 17:31:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:46.531 17:31:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:46.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.531 17:31:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:46.531 17:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:46.791 [2024-10-13 17:31:55.069085] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:46.791 [2024-10-13 17:31:55.069141] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:46.791 [2024-10-13 17:31:55.157390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.791 [2024-10-13 17:31:55.235327] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:46.791 [2024-10-13 17:31:55.235466] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.791 [2024-10-13 17:31:55.235475] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.791 [2024-10-13 17:31:55.235484] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:46.791 [2024-10-13 17:31:55.235650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:46.791 [2024-10-13 17:31:55.235810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:46.791 [2024-10-13 17:31:55.235971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.791 [2024-10-13 17:31:55.235972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:47.363 17:31:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.363 17:31:55 -- common/autotest_common.sh@852 -- # return 0 00:21:47.363 17:31:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:47.363 17:31:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:47.363 17:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:47.624 17:31:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.624 17:31:55 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.624 17:31:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.624 17:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:47.624 [2024-10-13 17:31:55.917617] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.624 17:31:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.624 17:31:55 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:47.624 17:31:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.624 17:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:47.624 Malloc0 00:21:47.624 17:31:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.624 17:31:55 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.624 17:31:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.624 17:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:47.624 17:31:55 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:21:47.624 17:31:55 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:47.624 17:31:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.624 17:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:47.624 17:31:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.624 17:31:55 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.624 17:31:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.624 17:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:47.624 [2024-10-13 17:31:55.971424] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.624 17:31:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.624 17:31:55 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:47.624 17:31:55 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:47.624 17:31:55 -- nvmf/common.sh@520 -- # config=() 00:21:47.624 17:31:55 -- nvmf/common.sh@520 -- # local subsystem config 00:21:47.624 17:31:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:47.624 17:31:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:47.624 { 00:21:47.624 "params": { 00:21:47.624 "name": "Nvme$subsystem", 00:21:47.624 "trtype": "$TEST_TRANSPORT", 00:21:47.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.624 "adrfam": "ipv4", 00:21:47.624 "trsvcid": "$NVMF_PORT", 00:21:47.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.624 "hdgst": ${hdgst:-false}, 00:21:47.624 "ddgst": ${ddgst:-false} 00:21:47.624 }, 00:21:47.624 "method": "bdev_nvme_attach_controller" 00:21:47.624 } 00:21:47.624 EOF 00:21:47.624 )") 00:21:47.624 17:31:55 -- nvmf/common.sh@542 -- # cat 00:21:47.624 17:31:55 -- nvmf/common.sh@544 -- # jq 
. 00:21:47.624 17:31:55 -- nvmf/common.sh@545 -- # IFS=, 00:21:47.624 17:31:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:47.624 "params": { 00:21:47.624 "name": "Nvme1", 00:21:47.624 "trtype": "tcp", 00:21:47.624 "traddr": "10.0.0.2", 00:21:47.624 "adrfam": "ipv4", 00:21:47.624 "trsvcid": "4420", 00:21:47.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.624 "hdgst": false, 00:21:47.624 "ddgst": false 00:21:47.624 }, 00:21:47.624 "method": "bdev_nvme_attach_controller" 00:21:47.624 }' 00:21:47.624 [2024-10-13 17:31:56.023050] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:47.624 [2024-10-13 17:31:56.023121] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3230480 ] 00:21:47.624 [2024-10-13 17:31:56.089709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:47.885 [2024-10-13 17:31:56.159754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.885 [2024-10-13 17:31:56.159875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.885 [2024-10-13 17:31:56.159878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.885 [2024-10-13 17:31:56.380252] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:21:47.885 [2024-10-13 17:31:56.380280] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:47.885 I/O targets: 00:21:47.885 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:47.885 00:21:47.885 00:21:47.885 CUnit - A unit testing framework for C - Version 2.1-3 00:21:47.885 http://cunit.sourceforge.net/ 00:21:47.885 00:21:47.885 00:21:47.885 Suite: bdevio tests on: Nvme1n1 00:21:48.145 Test: blockdev write read block ...passed 00:21:48.145 Test: blockdev write zeroes read block ...passed 00:21:48.145 Test: blockdev write zeroes read no split ...passed 00:21:48.145 Test: blockdev write zeroes read split ...passed 00:21:48.145 Test: blockdev write zeroes read split partial ...passed 00:21:48.145 Test: blockdev reset ...[2024-10-13 17:31:56.601361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.145 [2024-10-13 17:31:56.601434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d1a80 (9): Bad file descriptor 00:21:48.145 [2024-10-13 17:31:56.660818] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:48.145 passed 00:21:48.405 Test: blockdev write read 8 blocks ...passed 00:21:48.406 Test: blockdev write read size > 128k ...passed 00:21:48.406 Test: blockdev write read invalid size ...passed 00:21:48.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:48.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:48.406 Test: blockdev write read max offset ...passed 00:21:48.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:48.406 Test: blockdev writev readv 8 blocks ...passed 00:21:48.406 Test: blockdev writev readv 30 x 1block ...passed 00:21:48.666 Test: blockdev writev readv block ...passed 00:21:48.666 Test: blockdev writev readv size > 128k ...passed 00:21:48.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:48.666 Test: blockdev comparev and writev ...[2024-10-13 17:31:56.964482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.666 [2024-10-13 17:31:56.964509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.666 [2024-10-13 17:31:56.964520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.666 [2024-10-13 17:31:56.964526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.666 [2024-10-13 17:31:56.964994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.666 [2024-10-13 17:31:56.965003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:48.666 [2024-10-13 17:31:56.965013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.666 [2024-10-13 17:31:56.965018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:48.666 [2024-10-13 17:31:56.965493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.666 [2024-10-13 17:31:56.965501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:48.666 [2024-10-13 17:31:56.965511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.666 [2024-10-13 17:31:56.965516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:48.666 [2024-10-13 17:31:56.965975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.666 [2024-10-13 17:31:56.965985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:48.666 [2024-10-13 17:31:56.965994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.666 [2024-10-13 17:31:56.966000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:48.666 passed 00:21:48.666 Test: blockdev nvme passthru rw ...passed 00:21:48.666 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:31:57.050905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.666 [2024-10-13 17:31:57.050916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:48.666 [2024-10-13 17:31:57.051230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.666 [2024-10-13 17:31:57.051243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:48.666 [2024-10-13 17:31:57.051546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.666 [2024-10-13 17:31:57.051554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:48.666 [2024-10-13 17:31:57.051865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.666 [2024-10-13 17:31:57.051874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:48.666 passed 00:21:48.666 Test: blockdev nvme admin passthru ...passed 00:21:48.666 Test: blockdev copy ...passed 00:21:48.666 00:21:48.666 Run Summary: Type Total Ran Passed Failed Inactive 00:21:48.666 suites 1 1 n/a 0 0 00:21:48.666 tests 23 23 23 0 0 00:21:48.666 asserts 152 152 152 0 n/a 00:21:48.666 00:21:48.666 Elapsed time = 1.460 seconds 00:21:48.927 17:31:57 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.927 17:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.927 17:31:57 -- common/autotest_common.sh@10 -- # set +x 00:21:48.927 17:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.927 17:31:57 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:48.927 17:31:57 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:48.927 17:31:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:48.927 17:31:57 -- nvmf/common.sh@116 -- # sync 00:21:48.927 
17:31:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:48.927 17:31:57 -- nvmf/common.sh@119 -- # set +e 00:21:48.927 17:31:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:48.927 17:31:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:48.927 rmmod nvme_tcp 00:21:48.927 rmmod nvme_fabrics 00:21:48.927 rmmod nvme_keyring 00:21:48.927 17:31:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:48.927 17:31:57 -- nvmf/common.sh@123 -- # set -e 00:21:48.927 17:31:57 -- nvmf/common.sh@124 -- # return 0 00:21:48.927 17:31:57 -- nvmf/common.sh@477 -- # '[' -n 3230219 ']' 00:21:48.927 17:31:57 -- nvmf/common.sh@478 -- # killprocess 3230219 00:21:48.927 17:31:57 -- common/autotest_common.sh@926 -- # '[' -z 3230219 ']' 00:21:48.927 17:31:57 -- common/autotest_common.sh@930 -- # kill -0 3230219 00:21:48.927 17:31:57 -- common/autotest_common.sh@931 -- # uname 00:21:48.927 17:31:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:48.927 17:31:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3230219 00:21:49.188 17:31:57 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:49.188 17:31:57 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:49.188 17:31:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3230219' 00:21:49.188 killing process with pid 3230219 00:21:49.188 17:31:57 -- common/autotest_common.sh@945 -- # kill 3230219 00:21:49.188 17:31:57 -- common/autotest_common.sh@950 -- # wait 3230219 00:21:49.448 17:31:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:49.448 17:31:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:49.448 17:31:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:49.448 17:31:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:49.448 17:31:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:49.448 17:31:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.448 17:31:57 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.448 17:31:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.991 17:31:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:51.991 00:21:51.991 real 0m12.068s 00:21:51.991 user 0m14.273s 00:21:51.991 sys 0m6.425s 00:21:51.991 17:31:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:51.991 17:31:59 -- common/autotest_common.sh@10 -- # set +x 00:21:51.991 ************************************ 00:21:51.991 END TEST nvmf_bdevio_no_huge 00:21:51.991 ************************************ 00:21:51.991 17:31:59 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:51.991 17:31:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:51.991 17:31:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:51.991 17:31:59 -- common/autotest_common.sh@10 -- # set +x 00:21:51.991 ************************************ 00:21:51.991 START TEST nvmf_tls 00:21:51.991 ************************************ 00:21:51.991 17:31:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:51.991 * Looking for test storage... 
00:21:51.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:51.991 17:32:00 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.991 17:32:00 -- nvmf/common.sh@7 -- # uname -s 00:21:51.991 17:32:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.991 17:32:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.991 17:32:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.991 17:32:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.991 17:32:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.991 17:32:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.991 17:32:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.991 17:32:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.991 17:32:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.991 17:32:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.991 17:32:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:51.991 17:32:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:51.991 17:32:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.991 17:32:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.991 17:32:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.991 17:32:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.991 17:32:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.991 17:32:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.991 17:32:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.991 17:32:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.991 17:32:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.991 17:32:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.992 17:32:00 -- paths/export.sh@5 -- # export PATH 00:21:51.992 17:32:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.992 17:32:00 -- nvmf/common.sh@46 -- # : 0 00:21:51.992 17:32:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:51.992 17:32:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:51.992 17:32:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:51.992 17:32:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.992 17:32:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.992 17:32:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:51.992 17:32:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:51.992 17:32:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:51.992 17:32:00 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:51.992 17:32:00 -- target/tls.sh@71 -- # nvmftestinit 00:21:51.992 17:32:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:51.992 17:32:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.992 17:32:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:51.992 17:32:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:51.992 17:32:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:51.992 17:32:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.992 17:32:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.992 17:32:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.992 17:32:00 -- nvmf/common.sh@402 -- # [[ phy != virt 
]] 00:21:51.992 17:32:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:51.992 17:32:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:51.992 17:32:00 -- common/autotest_common.sh@10 -- # set +x 00:22:00.128 17:32:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:00.128 17:32:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:00.128 17:32:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:00.128 17:32:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:00.128 17:32:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:00.128 17:32:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:00.128 17:32:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:00.128 17:32:07 -- nvmf/common.sh@294 -- # net_devs=() 00:22:00.128 17:32:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:00.128 17:32:07 -- nvmf/common.sh@295 -- # e810=() 00:22:00.128 17:32:07 -- nvmf/common.sh@295 -- # local -ga e810 00:22:00.128 17:32:07 -- nvmf/common.sh@296 -- # x722=() 00:22:00.128 17:32:07 -- nvmf/common.sh@296 -- # local -ga x722 00:22:00.128 17:32:07 -- nvmf/common.sh@297 -- # mlx=() 00:22:00.128 17:32:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:00.128 17:32:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.128 17:32:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:00.128 17:32:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:00.128 17:32:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:00.128 17:32:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:00.128 17:32:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:00.128 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:00.128 17:32:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:00.128 17:32:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:00.128 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:00.128 17:32:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:00.128 17:32:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:22:00.128 17:32:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:00.128 17:32:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.129 17:32:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:00.129 17:32:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.129 17:32:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:00.129 Found net devices under 0000:31:00.0: cvl_0_0 00:22:00.129 17:32:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.129 17:32:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:00.129 17:32:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.129 17:32:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:00.129 17:32:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.129 17:32:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:00.129 Found net devices under 0000:31:00.1: cvl_0_1 00:22:00.129 17:32:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.129 17:32:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:00.129 17:32:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:00.129 17:32:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:00.129 17:32:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:00.129 17:32:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:00.129 17:32:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.129 17:32:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.129 17:32:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.129 17:32:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:00.129 17:32:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.129 17:32:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.129 17:32:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:22:00.129 17:32:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.129 17:32:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.129 17:32:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:00.129 17:32:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:00.129 17:32:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.129 17:32:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.129 17:32:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.129 17:32:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.129 17:32:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:00.129 17:32:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.129 17:32:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.129 17:32:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.129 17:32:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:00.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:22:00.129 00:22:00.129 --- 10.0.0.2 ping statistics --- 00:22:00.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.129 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:22:00.129 17:32:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:22:00.129 00:22:00.129 --- 10.0.0.1 ping statistics --- 00:22:00.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.129 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:22:00.129 17:32:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.129 17:32:07 -- nvmf/common.sh@410 -- # return 0 00:22:00.129 17:32:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:00.129 17:32:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.129 17:32:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:00.129 17:32:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:00.129 17:32:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.129 17:32:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:00.129 17:32:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:00.129 17:32:07 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:00.129 17:32:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:00.129 17:32:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:00.129 17:32:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.129 17:32:07 -- nvmf/common.sh@469 -- # nvmfpid=3234994 00:22:00.129 17:32:07 -- nvmf/common.sh@470 -- # waitforlisten 3234994 00:22:00.129 17:32:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:00.129 17:32:07 -- common/autotest_common.sh@819 -- # '[' -z 3234994 ']' 00:22:00.129 17:32:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.129 17:32:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.129 17:32:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:00.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.129 17:32:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.129 17:32:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.129 [2024-10-13 17:32:07.555594] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:00.129 [2024-10-13 17:32:07.555672] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.129 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.129 [2024-10-13 17:32:07.651446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.129 [2024-10-13 17:32:07.696251] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.129 [2024-10-13 17:32:07.696402] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.129 [2024-10-13 17:32:07.696413] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.129 [2024-10-13 17:32:07.696421] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.129 [2024-10-13 17:32:07.696443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.129 17:32:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:00.129 17:32:08 -- common/autotest_common.sh@852 -- # return 0 00:22:00.129 17:32:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:00.129 17:32:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:00.129 17:32:08 -- common/autotest_common.sh@10 -- # set +x 00:22:00.129 17:32:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.129 17:32:08 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:22:00.129 17:32:08 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:00.129 true 00:22:00.129 17:32:08 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.129 17:32:08 -- target/tls.sh@82 -- # jq -r .tls_version 00:22:00.389 17:32:08 -- target/tls.sh@82 -- # version=0 00:22:00.389 17:32:08 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:22:00.389 17:32:08 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:00.389 17:32:08 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.389 17:32:08 -- target/tls.sh@90 -- # jq -r .tls_version 00:22:00.650 17:32:09 -- target/tls.sh@90 -- # version=13 00:22:00.650 17:32:09 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:22:00.650 17:32:09 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:00.910 17:32:09 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.910 17:32:09 -- target/tls.sh@98 -- # jq -r .tls_version 
00:22:00.910 17:32:09 -- target/tls.sh@98 -- # version=7 00:22:00.910 17:32:09 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:22:00.910 17:32:09 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:00.910 17:32:09 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.171 17:32:09 -- target/tls.sh@105 -- # ktls=false 00:22:01.171 17:32:09 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:22:01.171 17:32:09 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:01.432 17:32:09 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.432 17:32:09 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:01.432 17:32:09 -- target/tls.sh@113 -- # ktls=true 00:22:01.432 17:32:09 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:22:01.432 17:32:09 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:01.693 17:32:10 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.693 17:32:10 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:22:01.954 17:32:10 -- target/tls.sh@121 -- # ktls=false 00:22:01.954 17:32:10 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:22:01.954 17:32:10 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:22:01.954 17:32:10 -- target/tls.sh@49 -- # local key hash crc 00:22:01.954 17:32:10 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:22:01.954 17:32:10 -- target/tls.sh@51 -- # hash=01 00:22:01.954 17:32:10 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:22:01.954 17:32:10 -- target/tls.sh@52 -- # gzip -1 -c 00:22:01.954 17:32:10 -- target/tls.sh@52 -- # tail -c8 00:22:01.954 17:32:10 -- 
target/tls.sh@52 -- # head -c 4 00:22:01.954 17:32:10 -- target/tls.sh@52 -- # crc='p$H�' 00:22:01.954 17:32:10 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:22:01.954 17:32:10 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:22:01.954 17:32:10 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:01.954 17:32:10 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:01.954 17:32:10 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:22:01.954 17:32:10 -- target/tls.sh@49 -- # local key hash crc 00:22:01.954 17:32:10 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:22:01.954 17:32:10 -- target/tls.sh@51 -- # hash=01 00:22:01.954 17:32:10 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:22:01.954 17:32:10 -- target/tls.sh@52 -- # gzip -1 -c 00:22:01.954 17:32:10 -- target/tls.sh@52 -- # tail -c8 00:22:01.954 17:32:10 -- target/tls.sh@52 -- # head -c 4 00:22:01.954 17:32:10 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:22:01.955 17:32:10 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:22:01.955 17:32:10 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:22:01.955 17:32:10 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:01.955 17:32:10 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:01.955 17:32:10 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:01.955 17:32:10 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:01.955 17:32:10 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:01.955 17:32:10 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:01.955 17:32:10 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:01.955 17:32:10 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:01.955 17:32:10 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:02.215 17:32:10 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:02.215 17:32:10 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:02.215 17:32:10 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:02.215 17:32:10 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:02.475 [2024-10-13 17:32:10.882622] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.475 17:32:10 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:02.736 17:32:11 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:02.736 [2024-10-13 17:32:11.175339] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.736 [2024-10-13 17:32:11.175517] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.736 17:32:11 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:02.996 malloc0 00:22:02.996 17:32:11 -- 
target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:02.996 17:32:11 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:03.256 17:32:11 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:03.256 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.249 Initializing NVMe Controllers 00:22:13.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:13.249 Initialization complete. Launching workers. 
00:22:13.249 ======================================================== 00:22:13.249 Latency(us) 00:22:13.249 Device Information : IOPS MiB/s Average min max 00:22:13.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19236.49 75.14 3327.06 1170.72 4099.31 00:22:13.249 ======================================================== 00:22:13.249 Total : 19236.49 75.14 3327.06 1170.72 4099.31 00:22:13.249 00:22:13.249 17:32:21 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:13.249 17:32:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:13.249 17:32:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:13.249 17:32:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:13.249 17:32:21 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:22:13.249 17:32:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.249 17:32:21 -- target/tls.sh@28 -- # bdevperf_pid=3237785 00:22:13.249 17:32:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.249 17:32:21 -- target/tls.sh@31 -- # waitforlisten 3237785 /var/tmp/bdevperf.sock 00:22:13.249 17:32:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.250 17:32:21 -- common/autotest_common.sh@819 -- # '[' -z 3237785 ']' 00:22:13.250 17:32:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.250 17:32:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:13.250 17:32:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:13.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.250 17:32:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:13.250 17:32:21 -- common/autotest_common.sh@10 -- # set +x 00:22:13.510 [2024-10-13 17:32:21.801004] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:13.510 [2024-10-13 17:32:21.801070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3237785 ] 00:22:13.510 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.510 [2024-10-13 17:32:21.853514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.510 [2024-10-13 17:32:21.879919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.081 17:32:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:14.081 17:32:22 -- common/autotest_common.sh@852 -- # return 0 00:22:14.081 17:32:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:14.341 [2024-10-13 17:32:22.724012] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.341 TLSTESTn1 00:22:14.341 17:32:22 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:14.601 Running I/O for 10 seconds... 
00:22:24.597 00:22:24.597 Latency(us) 00:22:24.597 [2024-10-13T15:32:33.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.597 [2024-10-13T15:32:33.121Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:24.597 Verification LBA range: start 0x0 length 0x2000 00:22:24.597 TLSTESTn1 : 10.03 6374.65 24.90 0.00 0.00 20047.17 3986.77 48278.19 00:22:24.597 [2024-10-13T15:32:33.121Z] =================================================================================================================== 00:22:24.597 [2024-10-13T15:32:33.121Z] Total : 6374.65 24.90 0.00 0.00 20047.17 3986.77 48278.19 00:22:24.597 0 00:22:24.597 17:32:32 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.597 17:32:32 -- target/tls.sh@45 -- # killprocess 3237785 00:22:24.597 17:32:32 -- common/autotest_common.sh@926 -- # '[' -z 3237785 ']' 00:22:24.597 17:32:32 -- common/autotest_common.sh@930 -- # kill -0 3237785 00:22:24.597 17:32:32 -- common/autotest_common.sh@931 -- # uname 00:22:24.597 17:32:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:24.597 17:32:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3237785 00:22:24.597 17:32:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:24.597 17:32:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:24.597 17:32:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3237785' 00:22:24.597 killing process with pid 3237785 00:22:24.597 17:32:33 -- common/autotest_common.sh@945 -- # kill 3237785 00:22:24.597 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.597 00:22:24.597 Latency(us) 00:22:24.597 [2024-10-13T15:32:33.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.597 [2024-10-13T15:32:33.121Z] 
=================================================================================================================== 00:22:24.597 [2024-10-13T15:32:33.121Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.597 17:32:33 -- common/autotest_common.sh@950 -- # wait 3237785 00:22:24.857 17:32:33 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:24.857 17:32:33 -- common/autotest_common.sh@640 -- # local es=0 00:22:24.857 17:32:33 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:24.857 17:32:33 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:24.857 17:32:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:24.857 17:32:33 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:24.857 17:32:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:24.857 17:32:33 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:24.857 17:32:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:24.857 17:32:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:24.857 17:32:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:24.857 17:32:33 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:22:24.857 17:32:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:24.857 17:32:33 -- target/tls.sh@28 -- # bdevperf_pid=3240152 00:22:24.857 17:32:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:24.857 17:32:33 -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:24.857 17:32:33 -- target/tls.sh@31 -- # waitforlisten 3240152 /var/tmp/bdevperf.sock 00:22:24.857 17:32:33 -- common/autotest_common.sh@819 -- # '[' -z 3240152 ']' 00:22:24.857 17:32:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.857 17:32:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:24.858 17:32:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.858 17:32:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:24.858 17:32:33 -- common/autotest_common.sh@10 -- # set +x 00:22:24.858 [2024-10-13 17:32:33.209418] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:22:24.858 [2024-10-13 17:32:33.209473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3240152 ] 00:22:24.858 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.858 [2024-10-13 17:32:33.261750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.858 [2024-10-13 17:32:33.286275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.798 17:32:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:25.798 17:32:33 -- common/autotest_common.sh@852 -- # return 0 00:22:25.798 17:32:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:25.798 [2024-10-13 17:32:34.130468] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.798 [2024-10-13 17:32:34.141573] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:25.798 [2024-10-13 17:32:34.142305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191b5a0 (107): Transport endpoint is not connected 00:22:25.798 [2024-10-13 17:32:34.143300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191b5a0 (9): Bad file descriptor 00:22:25.798 [2024-10-13 17:32:34.144301] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:25.798 [2024-10-13 17:32:34.144310] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: 
Failed to initialize SSD: 10.0.0.2 00:22:25.798 [2024-10-13 17:32:34.144315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:25.798 request: 00:22:25.798 { 00:22:25.798 "name": "TLSTEST", 00:22:25.798 "trtype": "tcp", 00:22:25.798 "traddr": "10.0.0.2", 00:22:25.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.798 "adrfam": "ipv4", 00:22:25.798 "trsvcid": "4420", 00:22:25.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.798 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:22:25.798 "method": "bdev_nvme_attach_controller", 00:22:25.798 "req_id": 1 00:22:25.798 } 00:22:25.798 Got JSON-RPC error response 00:22:25.798 response: 00:22:25.798 { 00:22:25.798 "code": -32602, 00:22:25.798 "message": "Invalid parameters" 00:22:25.798 } 00:22:25.798 17:32:34 -- target/tls.sh@36 -- # killprocess 3240152 00:22:25.798 17:32:34 -- common/autotest_common.sh@926 -- # '[' -z 3240152 ']' 00:22:25.798 17:32:34 -- common/autotest_common.sh@930 -- # kill -0 3240152 00:22:25.798 17:32:34 -- common/autotest_common.sh@931 -- # uname 00:22:25.799 17:32:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:25.799 17:32:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3240152 00:22:25.799 17:32:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:25.799 17:32:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:25.799 17:32:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3240152' 00:22:25.799 killing process with pid 3240152 00:22:25.799 17:32:34 -- common/autotest_common.sh@945 -- # kill 3240152 00:22:25.799 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.799 00:22:25.799 Latency(us) 00:22:25.799 [2024-10-13T15:32:34.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.799 [2024-10-13T15:32:34.323Z] 
=================================================================================================================== 00:22:25.799 [2024-10-13T15:32:34.323Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:25.799 17:32:34 -- common/autotest_common.sh@950 -- # wait 3240152 00:22:25.799 17:32:34 -- target/tls.sh@37 -- # return 1 00:22:26.059 17:32:34 -- common/autotest_common.sh@643 -- # es=1 00:22:26.059 17:32:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:26.059 17:32:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:26.059 17:32:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:26.059 17:32:34 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:26.059 17:32:34 -- common/autotest_common.sh@640 -- # local es=0 00:22:26.059 17:32:34 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:26.059 17:32:34 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:26.059 17:32:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:26.059 17:32:34 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:26.059 17:32:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:26.059 17:32:34 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:26.059 17:32:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.059 17:32:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.059 17:32:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:26.059 17:32:34 -- target/tls.sh@23 -- # psk='--psk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:22:26.059 17:32:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.059 17:32:34 -- target/tls.sh@28 -- # bdevperf_pid=3240347 00:22:26.059 17:32:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.059 17:32:34 -- target/tls.sh@31 -- # waitforlisten 3240347 /var/tmp/bdevperf.sock 00:22:26.059 17:32:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.059 17:32:34 -- common/autotest_common.sh@819 -- # '[' -z 3240347 ']' 00:22:26.059 17:32:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.059 17:32:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:26.059 17:32:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.059 17:32:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:26.059 17:32:34 -- common/autotest_common.sh@10 -- # set +x 00:22:26.059 [2024-10-13 17:32:34.375616] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:22:26.059 [2024-10-13 17:32:34.375670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3240347 ] 00:22:26.059 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.059 [2024-10-13 17:32:34.426887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.059 [2024-10-13 17:32:34.453284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.630 17:32:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:26.630 17:32:35 -- common/autotest_common.sh@852 -- # return 0 00:22:26.630 17:32:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:26.892 [2024-10-13 17:32:35.293393] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.892 [2024-10-13 17:32:35.304334] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:26.892 [2024-10-13 17:32:35.304353] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:26.892 [2024-10-13 17:32:35.304374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:26.892 [2024-10-13 17:32:35.305155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f75a0 (107): Transport endpoint is not connected 00:22:26.892 [2024-10-13 
17:32:35.306150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f75a0 (9): Bad file descriptor 00:22:26.892 [2024-10-13 17:32:35.307152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:26.892 [2024-10-13 17:32:35.307159] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:26.892 [2024-10-13 17:32:35.307165] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.892 request: 00:22:26.892 { 00:22:26.892 "name": "TLSTEST", 00:22:26.892 "trtype": "tcp", 00:22:26.892 "traddr": "10.0.0.2", 00:22:26.892 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:26.892 "adrfam": "ipv4", 00:22:26.892 "trsvcid": "4420", 00:22:26.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.892 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:22:26.892 "method": "bdev_nvme_attach_controller", 00:22:26.892 "req_id": 1 00:22:26.892 } 00:22:26.892 Got JSON-RPC error response 00:22:26.892 response: 00:22:26.892 { 00:22:26.892 "code": -32602, 00:22:26.892 "message": "Invalid parameters" 00:22:26.892 } 00:22:26.892 17:32:35 -- target/tls.sh@36 -- # killprocess 3240347 00:22:26.892 17:32:35 -- common/autotest_common.sh@926 -- # '[' -z 3240347 ']' 00:22:26.892 17:32:35 -- common/autotest_common.sh@930 -- # kill -0 3240347 00:22:26.892 17:32:35 -- common/autotest_common.sh@931 -- # uname 00:22:26.892 17:32:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:26.892 17:32:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3240347 00:22:26.892 17:32:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:26.892 17:32:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:26.892 17:32:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3240347' 00:22:26.892 killing process with pid 3240347 00:22:26.892 
17:32:35 -- common/autotest_common.sh@945 -- # kill 3240347 00:22:26.892 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.892 00:22:26.892 Latency(us) 00:22:26.892 [2024-10-13T15:32:35.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.892 [2024-10-13T15:32:35.416Z] =================================================================================================================== 00:22:26.892 [2024-10-13T15:32:35.416Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.892 17:32:35 -- common/autotest_common.sh@950 -- # wait 3240347 00:22:27.154 17:32:35 -- target/tls.sh@37 -- # return 1 00:22:27.154 17:32:35 -- common/autotest_common.sh@643 -- # es=1 00:22:27.154 17:32:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:27.154 17:32:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:27.154 17:32:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:27.154 17:32:35 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:27.154 17:32:35 -- common/autotest_common.sh@640 -- # local es=0 00:22:27.154 17:32:35 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:27.154 17:32:35 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:27.154 17:32:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:27.154 17:32:35 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:27.154 17:32:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:27.154 17:32:35 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:27.154 
17:32:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:27.154 17:32:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:27.154 17:32:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:27.154 17:32:35 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:22:27.154 17:32:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.154 17:32:35 -- target/tls.sh@28 -- # bdevperf_pid=3240515 00:22:27.154 17:32:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.154 17:32:35 -- target/tls.sh@31 -- # waitforlisten 3240515 /var/tmp/bdevperf.sock 00:22:27.155 17:32:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.155 17:32:35 -- common/autotest_common.sh@819 -- # '[' -z 3240515 ']' 00:22:27.155 17:32:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.155 17:32:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:27.155 17:32:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.155 17:32:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:27.155 17:32:35 -- common/autotest_common.sh@10 -- # set +x 00:22:27.155 [2024-10-13 17:32:35.539464] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:22:27.155 [2024-10-13 17:32:35.539519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3240515 ] 00:22:27.155 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.155 [2024-10-13 17:32:35.591743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.155 [2024-10-13 17:32:35.616042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.148 17:32:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:28.148 17:32:36 -- common/autotest_common.sh@852 -- # return 0 00:22:28.148 17:32:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:28.148 [2024-10-13 17:32:36.472199] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.148 [2024-10-13 17:32:36.477195] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:28.148 [2024-10-13 17:32:36.477213] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:28.148 [2024-10-13 17:32:36.477232] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:28.148 [2024-10-13 17:32:36.478080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206b5a0 (107): Transport endpoint is not connected 00:22:28.148 [2024-10-13 
17:32:36.479076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206b5a0 (9): Bad file descriptor 00:22:28.148 [2024-10-13 17:32:36.480078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:28.148 [2024-10-13 17:32:36.480086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:28.148 [2024-10-13 17:32:36.480091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:28.148 request: 00:22:28.148 { 00:22:28.148 "name": "TLSTEST", 00:22:28.148 "trtype": "tcp", 00:22:28.148 "traddr": "10.0.0.2", 00:22:28.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.148 "adrfam": "ipv4", 00:22:28.148 "trsvcid": "4420", 00:22:28.148 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:28.148 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:22:28.148 "method": "bdev_nvme_attach_controller", 00:22:28.148 "req_id": 1 00:22:28.148 } 00:22:28.148 Got JSON-RPC error response 00:22:28.148 response: 00:22:28.148 { 00:22:28.148 "code": -32602, 00:22:28.148 "message": "Invalid parameters" 00:22:28.148 } 00:22:28.148 17:32:36 -- target/tls.sh@36 -- # killprocess 3240515 00:22:28.149 17:32:36 -- common/autotest_common.sh@926 -- # '[' -z 3240515 ']' 00:22:28.149 17:32:36 -- common/autotest_common.sh@930 -- # kill -0 3240515 00:22:28.149 17:32:36 -- common/autotest_common.sh@931 -- # uname 00:22:28.149 17:32:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:28.149 17:32:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3240515 00:22:28.149 17:32:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:28.149 17:32:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:28.149 17:32:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3240515' 00:22:28.149 killing process with pid 3240515 00:22:28.149 
17:32:36 -- common/autotest_common.sh@945 -- # kill 3240515 00:22:28.149 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.149 00:22:28.149 Latency(us) 00:22:28.149 [2024-10-13T15:32:36.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.149 [2024-10-13T15:32:36.673Z] =================================================================================================================== 00:22:28.149 [2024-10-13T15:32:36.673Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:28.149 17:32:36 -- common/autotest_common.sh@950 -- # wait 3240515 00:22:28.149 17:32:36 -- target/tls.sh@37 -- # return 1 00:22:28.149 17:32:36 -- common/autotest_common.sh@643 -- # es=1 00:22:28.149 17:32:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:28.149 17:32:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:28.149 17:32:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:28.149 17:32:36 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.149 17:32:36 -- common/autotest_common.sh@640 -- # local es=0 00:22:28.149 17:32:36 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.149 17:32:36 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:28.149 17:32:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:28.149 17:32:36 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:28.412 17:32:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:28.412 17:32:36 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.412 17:32:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:28.412 17:32:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:28.412 17:32:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:22:28.412 17:32:36 -- target/tls.sh@23 -- # psk= 00:22:28.412 17:32:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.412 17:32:36 -- target/tls.sh@28 -- # bdevperf_pid=3240858 00:22:28.412 17:32:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.412 17:32:36 -- target/tls.sh@31 -- # waitforlisten 3240858 /var/tmp/bdevperf.sock 00:22:28.413 17:32:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.413 17:32:36 -- common/autotest_common.sh@819 -- # '[' -z 3240858 ']' 00:22:28.413 17:32:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.413 17:32:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.413 17:32:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.413 17:32:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.413 17:32:36 -- common/autotest_common.sh@10 -- # set +x 00:22:28.413 [2024-10-13 17:32:36.721779] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:22:28.413 [2024-10-13 17:32:36.721834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3240858 ] 00:22:28.413 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.413 [2024-10-13 17:32:36.773637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.413 [2024-10-13 17:32:36.799484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.413 17:32:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:28.413 17:32:36 -- common/autotest_common.sh@852 -- # return 0 00:22:28.413 17:32:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:28.675 [2024-10-13 17:32:37.028547] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:28.675 [2024-10-13 17:32:37.030327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x765fd0 (9): Bad file descriptor 00:22:28.675 [2024-10-13 17:32:37.031326] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:28.675 [2024-10-13 17:32:37.031334] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:28.675 [2024-10-13 17:32:37.031340] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:28.675 request: 00:22:28.675 { 00:22:28.675 "name": "TLSTEST", 00:22:28.675 "trtype": "tcp", 00:22:28.675 "traddr": "10.0.0.2", 00:22:28.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.675 "adrfam": "ipv4", 00:22:28.675 "trsvcid": "4420", 00:22:28.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.675 "method": "bdev_nvme_attach_controller", 00:22:28.675 "req_id": 1 00:22:28.675 } 00:22:28.675 Got JSON-RPC error response 00:22:28.675 response: 00:22:28.675 { 00:22:28.675 "code": -32602, 00:22:28.675 "message": "Invalid parameters" 00:22:28.675 } 00:22:28.675 17:32:37 -- target/tls.sh@36 -- # killprocess 3240858 00:22:28.675 17:32:37 -- common/autotest_common.sh@926 -- # '[' -z 3240858 ']' 00:22:28.675 17:32:37 -- common/autotest_common.sh@930 -- # kill -0 3240858 00:22:28.675 17:32:37 -- common/autotest_common.sh@931 -- # uname 00:22:28.675 17:32:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:28.675 17:32:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3240858 00:22:28.675 17:32:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:28.675 17:32:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:28.675 17:32:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3240858' 00:22:28.675 killing process with pid 3240858 00:22:28.675 17:32:37 -- common/autotest_common.sh@945 -- # kill 3240858 00:22:28.675 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.675 00:22:28.675 Latency(us) 00:22:28.675 [2024-10-13T15:32:37.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.675 [2024-10-13T15:32:37.199Z] =================================================================================================================== 00:22:28.675 [2024-10-13T15:32:37.199Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:28.675 17:32:37 -- common/autotest_common.sh@950 -- # wait 3240858 00:22:28.935 17:32:37 -- 
target/tls.sh@37 -- # return 1 00:22:28.935 17:32:37 -- common/autotest_common.sh@643 -- # es=1 00:22:28.935 17:32:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:28.935 17:32:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:28.935 17:32:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:28.935 17:32:37 -- target/tls.sh@167 -- # killprocess 3234994 00:22:28.935 17:32:37 -- common/autotest_common.sh@926 -- # '[' -z 3234994 ']' 00:22:28.935 17:32:37 -- common/autotest_common.sh@930 -- # kill -0 3234994 00:22:28.935 17:32:37 -- common/autotest_common.sh@931 -- # uname 00:22:28.935 17:32:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:28.935 17:32:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3234994 00:22:28.935 17:32:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:28.935 17:32:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:28.935 17:32:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3234994' 00:22:28.935 killing process with pid 3234994 00:22:28.935 17:32:37 -- common/autotest_common.sh@945 -- # kill 3234994 00:22:28.935 17:32:37 -- common/autotest_common.sh@950 -- # wait 3234994 00:22:28.935 17:32:37 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:22:28.935 17:32:37 -- target/tls.sh@49 -- # local key hash crc 00:22:28.935 17:32:37 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:28.935 17:32:37 -- target/tls.sh@51 -- # hash=02 00:22:28.935 17:32:37 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:22:28.935 17:32:37 -- target/tls.sh@52 -- # gzip -1 -c 00:22:28.935 17:32:37 -- target/tls.sh@52 -- # tail -c8 00:22:28.935 17:32:37 -- target/tls.sh@52 -- # head -c 4 00:22:28.935 17:32:37 -- target/tls.sh@52 -- # crc='�e�'\''' 00:22:28.935 17:32:37 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:22:28.935 
17:32:37 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:22:28.935 17:32:37 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:28.935 17:32:37 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:28.935 17:32:37 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:28.935 17:32:37 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:28.935 17:32:37 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:28.935 17:32:37 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:22:28.935 17:32:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:28.935 17:32:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:28.935 17:32:37 -- common/autotest_common.sh@10 -- # set +x 00:22:28.935 17:32:37 -- nvmf/common.sh@469 -- # nvmfpid=3240912 00:22:28.935 17:32:37 -- nvmf/common.sh@470 -- # waitforlisten 3240912 00:22:28.935 17:32:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:28.935 17:32:37 -- common/autotest_common.sh@819 -- # '[' -z 3240912 ']' 00:22:28.935 17:32:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.935 17:32:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.935 17:32:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
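The `format_interchange_psk` trace above builds the retained-key string by piping the hex key through `gzip -1`, taking the CRC-32 from the gzip trailer (`tail -c8 | head -c 4`), appending it to the key, and base64-encoding the result. A minimal Python sketch of that derivation (assuming, per RFC 1952, that the gzip trailer CRC is the standard CRC-32, i.e. `zlib.crc32`):

```python
import base64
import struct
import zlib

def format_interchange_psk(key_hex_str: str, hash_id: str) -> str:
    """Sketch of the tls.sh helper's gzip/tail/head/base64 pipeline.

    The gzip trailer stores the CRC-32 of the input as 4 little-endian
    bytes; 'tail -c8 | head -c 4' extracts exactly those bytes.
    """
    data = key_hex_str.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(data))  # 4-byte LE CRC-32
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id}:{b64}:"

print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", "02"))
```

Running this reproduces the `NVMeTLSkey-1:02:MDAx...wWXNJw==:` value echoed into `key_long.txt` above; the unprintable `�e�'` bytes in the trace are simply the raw CRC bytes held in the shell's `crc` variable.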
00:22:28.935 17:32:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.935 17:32:37 -- common/autotest_common.sh@10 -- # set +x 00:22:29.201 [2024-10-13 17:32:37.487312] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:29.201 [2024-10-13 17:32:37.487371] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.201 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.201 [2024-10-13 17:32:37.573154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.201 [2024-10-13 17:32:37.600903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:29.201 [2024-10-13 17:32:37.600998] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.201 [2024-10-13 17:32:37.601005] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.201 [2024-10-13 17:32:37.601010] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:29.201 [2024-10-13 17:32:37.601024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.773 17:32:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:29.773 17:32:38 -- common/autotest_common.sh@852 -- # return 0 00:22:29.773 17:32:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:29.773 17:32:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:29.773 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:22:29.773 17:32:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.773 17:32:38 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:29.773 17:32:38 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:29.773 17:32:38 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:30.033 [2024-10-13 17:32:38.438230] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.033 17:32:38 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:30.293 17:32:38 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.293 [2024-10-13 17:32:38.746984] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.293 [2024-10-13 17:32:38.747174] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.293 17:32:38 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:30.563 malloc0 00:22:30.563 17:32:38 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.563 17:32:39 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:30.845 17:32:39 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:30.845 17:32:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:30.845 17:32:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:30.845 17:32:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:30.845 17:32:39 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:30.845 17:32:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.845 17:32:39 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:30.845 17:32:39 -- target/tls.sh@28 -- # bdevperf_pid=3241283 00:22:30.845 17:32:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.845 17:32:39 -- target/tls.sh@31 -- # waitforlisten 3241283 /var/tmp/bdevperf.sock 00:22:30.845 17:32:39 -- common/autotest_common.sh@819 -- # '[' -z 3241283 ']' 00:22:30.845 17:32:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.845 17:32:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:30.845 17:32:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:30.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.845 17:32:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:30.845 17:32:39 -- common/autotest_common.sh@10 -- # set +x 00:22:30.845 [2024-10-13 17:32:39.223363] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:30.845 [2024-10-13 17:32:39.223405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241283 ] 00:22:30.845 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.845 [2024-10-13 17:32:39.267884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.845 [2024-10-13 17:32:39.294401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.155 17:32:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:31.155 17:32:39 -- common/autotest_common.sh@852 -- # return 0 00:22:31.155 17:32:39 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:31.155 [2024-10-13 17:32:39.524991] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.155 TLSTESTn1 00:22:31.155 17:32:39 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:31.433 Running I/O for 10 seconds... 
00:22:41.462 00:22:41.462 Latency(us) 00:22:41.462 [2024-10-13T15:32:49.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.462 [2024-10-13T15:32:49.986Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.462 Verification LBA range: start 0x0 length 0x2000 00:22:41.462 TLSTESTn1 : 10.02 6749.70 26.37 0.00 0.00 18944.30 3659.09 50899.63 00:22:41.462 [2024-10-13T15:32:49.986Z] =================================================================================================================== 00:22:41.462 [2024-10-13T15:32:49.986Z] Total : 6749.70 26.37 0.00 0.00 18944.30 3659.09 50899.63 00:22:41.462 0 00:22:41.462 17:32:49 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.462 17:32:49 -- target/tls.sh@45 -- # killprocess 3241283 00:22:41.462 17:32:49 -- common/autotest_common.sh@926 -- # '[' -z 3241283 ']' 00:22:41.462 17:32:49 -- common/autotest_common.sh@930 -- # kill -0 3241283 00:22:41.462 17:32:49 -- common/autotest_common.sh@931 -- # uname 00:22:41.462 17:32:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:41.462 17:32:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3241283 00:22:41.462 17:32:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:41.462 17:32:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:41.462 17:32:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3241283' 00:22:41.462 killing process with pid 3241283 00:22:41.462 17:32:49 -- common/autotest_common.sh@945 -- # kill 3241283 00:22:41.462 Received shutdown signal, test time was about 10.000000 seconds 00:22:41.462 00:22:41.462 Latency(us) 00:22:41.462 [2024-10-13T15:32:49.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.462 [2024-10-13T15:32:49.986Z] 
=================================================================================================================== 00:22:41.462 [2024-10-13T15:32:49.987Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.463 17:32:49 -- common/autotest_common.sh@950 -- # wait 3241283 00:22:41.463 17:32:49 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:41.463 17:32:49 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:41.463 17:32:49 -- common/autotest_common.sh@640 -- # local es=0 00:22:41.463 17:32:49 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:41.463 17:32:49 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:41.463 17:32:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:41.463 17:32:49 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:41.463 17:32:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:41.463 17:32:49 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:41.463 17:32:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.463 17:32:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.463 17:32:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.463 17:32:49 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:41.463 17:32:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.463 17:32:49 -- target/tls.sh@28 -- # bdevperf_pid=3243464 00:22:41.463 17:32:49 
-- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.463 17:32:49 -- target/tls.sh@31 -- # waitforlisten 3243464 /var/tmp/bdevperf.sock 00:22:41.463 17:32:49 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.463 17:32:49 -- common/autotest_common.sh@819 -- # '[' -z 3243464 ']' 00:22:41.463 17:32:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.463 17:32:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:41.463 17:32:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.463 17:32:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:41.463 17:32:49 -- common/autotest_common.sh@10 -- # set +x 00:22:41.724 [2024-10-13 17:32:50.003538] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:22:41.724 [2024-10-13 17:32:50.003592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243464 ] 00:22:41.724 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.724 [2024-10-13 17:32:50.057677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.724 [2024-10-13 17:32:50.084129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.297 17:32:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:42.297 17:32:50 -- common/autotest_common.sh@852 -- # return 0 00:22:42.297 17:32:50 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:42.558 [2024-10-13 17:32:50.937028] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.558 [2024-10-13 17:32:50.937061] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:42.558 request: 00:22:42.558 { 00:22:42.558 "name": "TLSTEST", 00:22:42.558 "trtype": "tcp", 00:22:42.558 "traddr": "10.0.0.2", 00:22:42.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.558 "adrfam": "ipv4", 00:22:42.558 "trsvcid": "4420", 00:22:42.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.558 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:42.558 "method": "bdev_nvme_attach_controller", 00:22:42.558 "req_id": 1 00:22:42.558 } 00:22:42.558 Got JSON-RPC error response 00:22:42.558 response: 00:22:42.558 { 00:22:42.558 "code": -22, 00:22:42.558 "message": "Could not retrieve PSK from file: 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:42.558 } 00:22:42.558 17:32:50 -- target/tls.sh@36 -- # killprocess 3243464 00:22:42.558 17:32:50 -- common/autotest_common.sh@926 -- # '[' -z 3243464 ']' 00:22:42.558 17:32:50 -- common/autotest_common.sh@930 -- # kill -0 3243464 00:22:42.558 17:32:50 -- common/autotest_common.sh@931 -- # uname 00:22:42.558 17:32:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:42.558 17:32:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3243464 00:22:42.558 17:32:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:42.558 17:32:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:42.558 17:32:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3243464' 00:22:42.558 killing process with pid 3243464 00:22:42.558 17:32:51 -- common/autotest_common.sh@945 -- # kill 3243464 00:22:42.558 Received shutdown signal, test time was about 10.000000 seconds 00:22:42.558 00:22:42.558 Latency(us) 00:22:42.558 [2024-10-13T15:32:51.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.558 [2024-10-13T15:32:51.082Z] =================================================================================================================== 00:22:42.558 [2024-10-13T15:32:51.082Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:42.558 17:32:51 -- common/autotest_common.sh@950 -- # wait 3243464 00:22:42.819 17:32:51 -- target/tls.sh@37 -- # return 1 00:22:42.819 17:32:51 -- common/autotest_common.sh@643 -- # es=1 00:22:42.820 17:32:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:42.820 17:32:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:42.820 17:32:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:42.820 17:32:51 -- target/tls.sh@183 -- # killprocess 3240912 00:22:42.820 17:32:51 -- common/autotest_common.sh@926 -- # '[' -z 3240912 ']' 
00:22:42.820 17:32:51 -- common/autotest_common.sh@930 -- # kill -0 3240912 00:22:42.820 17:32:51 -- common/autotest_common.sh@931 -- # uname 00:22:42.820 17:32:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:42.820 17:32:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3240912 00:22:42.820 17:32:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:42.820 17:32:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:42.820 17:32:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3240912' 00:22:42.820 killing process with pid 3240912 00:22:42.820 17:32:51 -- common/autotest_common.sh@945 -- # kill 3240912 00:22:42.820 17:32:51 -- common/autotest_common.sh@950 -- # wait 3240912 00:22:42.820 17:32:51 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:42.820 17:32:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:42.820 17:32:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:42.820 17:32:51 -- common/autotest_common.sh@10 -- # set +x 00:22:42.820 17:32:51 -- nvmf/common.sh@469 -- # nvmfpid=3243656 00:22:42.820 17:32:51 -- nvmf/common.sh@470 -- # waitforlisten 3243656 00:22:42.820 17:32:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:42.820 17:32:51 -- common/autotest_common.sh@819 -- # '[' -z 3243656 ']' 00:22:42.820 17:32:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.820 17:32:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:42.820 17:32:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:42.820 17:32:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:42.820 17:32:51 -- common/autotest_common.sh@10 -- # set +x 00:22:43.081 [2024-10-13 17:32:51.367402] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:43.081 [2024-10-13 17:32:51.367472] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.081 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.081 [2024-10-13 17:32:51.451322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.081 [2024-10-13 17:32:51.478207] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:43.081 [2024-10-13 17:32:51.478299] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.081 [2024-10-13 17:32:51.478305] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.081 [2024-10-13 17:32:51.478310] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.081 [2024-10-13 17:32:51.478325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.656 17:32:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:43.656 17:32:52 -- common/autotest_common.sh@852 -- # return 0 00:22:43.656 17:32:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:43.656 17:32:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:43.656 17:32:52 -- common/autotest_common.sh@10 -- # set +x 00:22:43.656 17:32:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.656 17:32:52 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:43.656 17:32:52 -- common/autotest_common.sh@640 -- # local es=0 00:22:43.656 17:32:52 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:43.656 17:32:52 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:22:43.656 17:32:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:43.656 17:32:52 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:22:43.656 17:32:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:43.656 17:32:52 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:43.656 17:32:52 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:43.656 17:32:52 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:43.918 [2024-10-13 17:32:52.319024] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.918 17:32:52 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:44.179 17:32:52 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:44.179 [2024-10-13 17:32:52.619763] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.179 [2024-10-13 17:32:52.619951] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.179 17:32:52 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.440 malloc0 00:22:44.440 17:32:52 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:44.440 17:32:52 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:44.701 [2024-10-13 17:32:53.082792] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:44.701 [2024-10-13 17:32:53.082812] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:44.701 [2024-10-13 17:32:53.082825] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:44.701 request: 00:22:44.701 { 00:22:44.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.701 "host": "nqn.2016-06.io.spdk:host1", 00:22:44.701 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:44.701 "method": "nvmf_subsystem_add_host", 00:22:44.701 "req_id": 1 00:22:44.701 } 00:22:44.701 Got JSON-RPC error response 00:22:44.701 response: 00:22:44.701 { 00:22:44.701 "code": -32603, 00:22:44.701 "message": "Internal error" 
00:22:44.701 } 00:22:44.701 17:32:53 -- common/autotest_common.sh@643 -- # es=1 00:22:44.701 17:32:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:44.701 17:32:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:44.701 17:32:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:44.701 17:32:53 -- target/tls.sh@189 -- # killprocess 3243656 00:22:44.701 17:32:53 -- common/autotest_common.sh@926 -- # '[' -z 3243656 ']' 00:22:44.701 17:32:53 -- common/autotest_common.sh@930 -- # kill -0 3243656 00:22:44.701 17:32:53 -- common/autotest_common.sh@931 -- # uname 00:22:44.701 17:32:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:44.701 17:32:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3243656 00:22:44.701 17:32:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:44.701 17:32:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:44.701 17:32:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3243656' 00:22:44.701 killing process with pid 3243656 00:22:44.701 17:32:53 -- common/autotest_common.sh@945 -- # kill 3243656 00:22:44.701 17:32:53 -- common/autotest_common.sh@950 -- # wait 3243656 00:22:44.963 17:32:53 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:44.963 17:32:53 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:22:44.963 17:32:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:44.963 17:32:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:44.963 17:32:53 -- common/autotest_common.sh@10 -- # set +x 00:22:44.963 17:32:53 -- nvmf/common.sh@469 -- # nvmfpid=3244112 00:22:44.963 17:32:53 -- nvmf/common.sh@470 -- # waitforlisten 3244112 00:22:44.963 17:32:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:44.963 17:32:53 -- 
common/autotest_common.sh@819 -- # '[' -z 3244112 ']' 00:22:44.963 17:32:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.963 17:32:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:44.963 17:32:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.963 17:32:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:44.963 17:32:53 -- common/autotest_common.sh@10 -- # set +x 00:22:44.963 [2024-10-13 17:32:53.303217] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:44.963 [2024-10-13 17:32:53.303274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.963 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.963 [2024-10-13 17:32:53.387492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.963 [2024-10-13 17:32:53.414745] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:44.963 [2024-10-13 17:32:53.414843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.963 [2024-10-13 17:32:53.414849] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.963 [2024-10-13 17:32:53.414855] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:44.963 [2024-10-13 17:32:53.414873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.906 17:32:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:45.906 17:32:54 -- common/autotest_common.sh@852 -- # return 0 00:22:45.906 17:32:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:45.906 17:32:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:45.906 17:32:54 -- common/autotest_common.sh@10 -- # set +x 00:22:45.906 17:32:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.906 17:32:54 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:45.906 17:32:54 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:45.906 17:32:54 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.906 [2024-10-13 17:32:54.259535] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.906 17:32:54 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:46.167 17:32:54 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:46.167 [2024-10-13 17:32:54.560276] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.167 [2024-10-13 17:32:54.560466] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.167 17:32:54 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:46.428 malloc0 00:22:46.428 17:32:54 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.428 17:32:54 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:46.689 17:32:55 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:46.689 17:32:55 -- target/tls.sh@197 -- # bdevperf_pid=3244504 00:22:46.689 17:32:55 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:46.689 17:32:55 -- target/tls.sh@200 -- # waitforlisten 3244504 /var/tmp/bdevperf.sock 00:22:46.689 17:32:55 -- common/autotest_common.sh@819 -- # '[' -z 3244504 ']' 00:22:46.689 17:32:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.689 17:32:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:46.689 17:32:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.689 17:32:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:46.689 17:32:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.689 [2024-10-13 17:32:55.060470] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:22:46.689 [2024-10-13 17:32:55.060519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244504 ] 00:22:46.689 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.689 [2024-10-13 17:32:55.111772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.689 [2024-10-13 17:32:55.138400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.631 17:32:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:47.631 17:32:55 -- common/autotest_common.sh@852 -- # return 0 00:22:47.631 17:32:55 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:47.631 [2024-10-13 17:32:56.010748] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.631 TLSTESTn1 00:22:47.631 17:32:56 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:47.892 17:32:56 -- target/tls.sh@205 -- # tgtconf='{ 00:22:47.892 "subsystems": [ 00:22:47.892 { 00:22:47.893 "subsystem": "iobuf", 00:22:47.893 "config": [ 00:22:47.893 { 00:22:47.893 "method": "iobuf_set_options", 00:22:47.893 "params": { 00:22:47.893 "small_pool_count": 8192, 00:22:47.893 "large_pool_count": 1024, 00:22:47.893 "small_bufsize": 8192, 00:22:47.893 "large_bufsize": 135168 00:22:47.893 } 00:22:47.893 } 00:22:47.893 ] 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "subsystem": "sock", 00:22:47.893 "config": [ 00:22:47.893 { 00:22:47.893 "method": "sock_impl_set_options", 00:22:47.893 "params": { 00:22:47.893 "impl_name": "posix", 
00:22:47.893 "recv_buf_size": 2097152, 00:22:47.893 "send_buf_size": 2097152, 00:22:47.893 "enable_recv_pipe": true, 00:22:47.893 "enable_quickack": false, 00:22:47.893 "enable_placement_id": 0, 00:22:47.893 "enable_zerocopy_send_server": true, 00:22:47.893 "enable_zerocopy_send_client": false, 00:22:47.893 "zerocopy_threshold": 0, 00:22:47.893 "tls_version": 0, 00:22:47.893 "enable_ktls": false 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "sock_impl_set_options", 00:22:47.893 "params": { 00:22:47.893 "impl_name": "ssl", 00:22:47.893 "recv_buf_size": 4096, 00:22:47.893 "send_buf_size": 4096, 00:22:47.893 "enable_recv_pipe": true, 00:22:47.893 "enable_quickack": false, 00:22:47.893 "enable_placement_id": 0, 00:22:47.893 "enable_zerocopy_send_server": true, 00:22:47.893 "enable_zerocopy_send_client": false, 00:22:47.893 "zerocopy_threshold": 0, 00:22:47.893 "tls_version": 0, 00:22:47.893 "enable_ktls": false 00:22:47.893 } 00:22:47.893 } 00:22:47.893 ] 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "subsystem": "vmd", 00:22:47.893 "config": [] 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "subsystem": "accel", 00:22:47.893 "config": [ 00:22:47.893 { 00:22:47.893 "method": "accel_set_options", 00:22:47.893 "params": { 00:22:47.893 "small_cache_size": 128, 00:22:47.893 "large_cache_size": 16, 00:22:47.893 "task_count": 2048, 00:22:47.893 "sequence_count": 2048, 00:22:47.893 "buf_count": 2048 00:22:47.893 } 00:22:47.893 } 00:22:47.893 ] 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "subsystem": "bdev", 00:22:47.893 "config": [ 00:22:47.893 { 00:22:47.893 "method": "bdev_set_options", 00:22:47.893 "params": { 00:22:47.893 "bdev_io_pool_size": 65535, 00:22:47.893 "bdev_io_cache_size": 256, 00:22:47.893 "bdev_auto_examine": true, 00:22:47.893 "iobuf_small_cache_size": 128, 00:22:47.893 "iobuf_large_cache_size": 16 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "bdev_raid_set_options", 00:22:47.893 "params": { 00:22:47.893 
"process_window_size_kb": 1024 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "bdev_iscsi_set_options", 00:22:47.893 "params": { 00:22:47.893 "timeout_sec": 30 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "bdev_nvme_set_options", 00:22:47.893 "params": { 00:22:47.893 "action_on_timeout": "none", 00:22:47.893 "timeout_us": 0, 00:22:47.893 "timeout_admin_us": 0, 00:22:47.893 "keep_alive_timeout_ms": 10000, 00:22:47.893 "transport_retry_count": 4, 00:22:47.893 "arbitration_burst": 0, 00:22:47.893 "low_priority_weight": 0, 00:22:47.893 "medium_priority_weight": 0, 00:22:47.893 "high_priority_weight": 0, 00:22:47.893 "nvme_adminq_poll_period_us": 10000, 00:22:47.893 "nvme_ioq_poll_period_us": 0, 00:22:47.893 "io_queue_requests": 0, 00:22:47.893 "delay_cmd_submit": true, 00:22:47.893 "bdev_retry_count": 3, 00:22:47.893 "transport_ack_timeout": 0, 00:22:47.893 "ctrlr_loss_timeout_sec": 0, 00:22:47.893 "reconnect_delay_sec": 0, 00:22:47.893 "fast_io_fail_timeout_sec": 0, 00:22:47.893 "generate_uuids": false, 00:22:47.893 "transport_tos": 0, 00:22:47.893 "io_path_stat": false, 00:22:47.893 "allow_accel_sequence": false 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "bdev_nvme_set_hotplug", 00:22:47.893 "params": { 00:22:47.893 "period_us": 100000, 00:22:47.893 "enable": false 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "bdev_malloc_create", 00:22:47.893 "params": { 00:22:47.893 "name": "malloc0", 00:22:47.893 "num_blocks": 8192, 00:22:47.893 "block_size": 4096, 00:22:47.893 "physical_block_size": 4096, 00:22:47.893 "uuid": "bfb5628c-6d8d-4f8b-95ff-412f4e928703", 00:22:47.893 "optimal_io_boundary": 0 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "bdev_wait_for_examine" 00:22:47.893 } 00:22:47.893 ] 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "subsystem": "nbd", 00:22:47.893 "config": [] 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "subsystem": "scheduler", 
00:22:47.893 "config": [ 00:22:47.893 { 00:22:47.893 "method": "framework_set_scheduler", 00:22:47.893 "params": { 00:22:47.893 "name": "static" 00:22:47.893 } 00:22:47.893 } 00:22:47.893 ] 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "subsystem": "nvmf", 00:22:47.893 "config": [ 00:22:47.893 { 00:22:47.893 "method": "nvmf_set_config", 00:22:47.893 "params": { 00:22:47.893 "discovery_filter": "match_any", 00:22:47.893 "admin_cmd_passthru": { 00:22:47.893 "identify_ctrlr": false 00:22:47.893 } 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "nvmf_set_max_subsystems", 00:22:47.893 "params": { 00:22:47.893 "max_subsystems": 1024 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "nvmf_set_crdt", 00:22:47.893 "params": { 00:22:47.893 "crdt1": 0, 00:22:47.893 "crdt2": 0, 00:22:47.893 "crdt3": 0 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "nvmf_create_transport", 00:22:47.893 "params": { 00:22:47.893 "trtype": "TCP", 00:22:47.893 "max_queue_depth": 128, 00:22:47.893 "max_io_qpairs_per_ctrlr": 127, 00:22:47.893 "in_capsule_data_size": 4096, 00:22:47.893 "max_io_size": 131072, 00:22:47.893 "io_unit_size": 131072, 00:22:47.893 "max_aq_depth": 128, 00:22:47.893 "num_shared_buffers": 511, 00:22:47.893 "buf_cache_size": 4294967295, 00:22:47.893 "dif_insert_or_strip": false, 00:22:47.893 "zcopy": false, 00:22:47.893 "c2h_success": false, 00:22:47.893 "sock_priority": 0, 00:22:47.893 "abort_timeout_sec": 1 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "nvmf_create_subsystem", 00:22:47.893 "params": { 00:22:47.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.893 "allow_any_host": false, 00:22:47.893 "serial_number": "SPDK00000000000001", 00:22:47.893 "model_number": "SPDK bdev Controller", 00:22:47.893 "max_namespaces": 10, 00:22:47.893 "min_cntlid": 1, 00:22:47.893 "max_cntlid": 65519, 00:22:47.893 "ana_reporting": false 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": 
"nvmf_subsystem_add_host", 00:22:47.893 "params": { 00:22:47.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.893 "host": "nqn.2016-06.io.spdk:host1", 00:22:47.893 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "nvmf_subsystem_add_ns", 00:22:47.893 "params": { 00:22:47.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.893 "namespace": { 00:22:47.893 "nsid": 1, 00:22:47.893 "bdev_name": "malloc0", 00:22:47.893 "nguid": "BFB5628C6D8D4F8B95FF412F4E928703", 00:22:47.893 "uuid": "bfb5628c-6d8d-4f8b-95ff-412f4e928703" 00:22:47.893 } 00:22:47.893 } 00:22:47.893 }, 00:22:47.893 { 00:22:47.893 "method": "nvmf_subsystem_add_listener", 00:22:47.893 "params": { 00:22:47.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.893 "listen_address": { 00:22:47.893 "trtype": "TCP", 00:22:47.893 "adrfam": "IPv4", 00:22:47.893 "traddr": "10.0.0.2", 00:22:47.893 "trsvcid": "4420" 00:22:47.893 }, 00:22:47.893 "secure_channel": true 00:22:47.893 } 00:22:47.893 } 00:22:47.893 ] 00:22:47.894 } 00:22:47.894 ] 00:22:47.894 }' 00:22:47.894 17:32:56 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:48.155 17:32:56 -- target/tls.sh@206 -- # bdevperfconf='{ 00:22:48.155 "subsystems": [ 00:22:48.155 { 00:22:48.155 "subsystem": "iobuf", 00:22:48.155 "config": [ 00:22:48.155 { 00:22:48.155 "method": "iobuf_set_options", 00:22:48.155 "params": { 00:22:48.155 "small_pool_count": 8192, 00:22:48.155 "large_pool_count": 1024, 00:22:48.155 "small_bufsize": 8192, 00:22:48.155 "large_bufsize": 135168 00:22:48.155 } 00:22:48.155 } 00:22:48.155 ] 00:22:48.155 }, 00:22:48.155 { 00:22:48.155 "subsystem": "sock", 00:22:48.155 "config": [ 00:22:48.155 { 00:22:48.155 "method": "sock_impl_set_options", 00:22:48.155 "params": { 00:22:48.155 "impl_name": "posix", 00:22:48.155 "recv_buf_size": 2097152, 00:22:48.155 
"send_buf_size": 2097152, 00:22:48.155 "enable_recv_pipe": true, 00:22:48.155 "enable_quickack": false, 00:22:48.155 "enable_placement_id": 0, 00:22:48.155 "enable_zerocopy_send_server": true, 00:22:48.155 "enable_zerocopy_send_client": false, 00:22:48.155 "zerocopy_threshold": 0, 00:22:48.155 "tls_version": 0, 00:22:48.155 "enable_ktls": false 00:22:48.155 } 00:22:48.155 }, 00:22:48.155 { 00:22:48.155 "method": "sock_impl_set_options", 00:22:48.155 "params": { 00:22:48.155 "impl_name": "ssl", 00:22:48.155 "recv_buf_size": 4096, 00:22:48.155 "send_buf_size": 4096, 00:22:48.155 "enable_recv_pipe": true, 00:22:48.155 "enable_quickack": false, 00:22:48.155 "enable_placement_id": 0, 00:22:48.155 "enable_zerocopy_send_server": true, 00:22:48.155 "enable_zerocopy_send_client": false, 00:22:48.155 "zerocopy_threshold": 0, 00:22:48.155 "tls_version": 0, 00:22:48.155 "enable_ktls": false 00:22:48.156 } 00:22:48.156 } 00:22:48.156 ] 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "subsystem": "vmd", 00:22:48.156 "config": [] 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "subsystem": "accel", 00:22:48.156 "config": [ 00:22:48.156 { 00:22:48.156 "method": "accel_set_options", 00:22:48.156 "params": { 00:22:48.156 "small_cache_size": 128, 00:22:48.156 "large_cache_size": 16, 00:22:48.156 "task_count": 2048, 00:22:48.156 "sequence_count": 2048, 00:22:48.156 "buf_count": 2048 00:22:48.156 } 00:22:48.156 } 00:22:48.156 ] 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "subsystem": "bdev", 00:22:48.156 "config": [ 00:22:48.156 { 00:22:48.156 "method": "bdev_set_options", 00:22:48.156 "params": { 00:22:48.156 "bdev_io_pool_size": 65535, 00:22:48.156 "bdev_io_cache_size": 256, 00:22:48.156 "bdev_auto_examine": true, 00:22:48.156 "iobuf_small_cache_size": 128, 00:22:48.156 "iobuf_large_cache_size": 16 00:22:48.156 } 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "method": "bdev_raid_set_options", 00:22:48.156 "params": { 00:22:48.156 "process_window_size_kb": 1024 00:22:48.156 } 00:22:48.156 }, 
00:22:48.156 { 00:22:48.156 "method": "bdev_iscsi_set_options", 00:22:48.156 "params": { 00:22:48.156 "timeout_sec": 30 00:22:48.156 } 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "method": "bdev_nvme_set_options", 00:22:48.156 "params": { 00:22:48.156 "action_on_timeout": "none", 00:22:48.156 "timeout_us": 0, 00:22:48.156 "timeout_admin_us": 0, 00:22:48.156 "keep_alive_timeout_ms": 10000, 00:22:48.156 "transport_retry_count": 4, 00:22:48.156 "arbitration_burst": 0, 00:22:48.156 "low_priority_weight": 0, 00:22:48.156 "medium_priority_weight": 0, 00:22:48.156 "high_priority_weight": 0, 00:22:48.156 "nvme_adminq_poll_period_us": 10000, 00:22:48.156 "nvme_ioq_poll_period_us": 0, 00:22:48.156 "io_queue_requests": 512, 00:22:48.156 "delay_cmd_submit": true, 00:22:48.156 "bdev_retry_count": 3, 00:22:48.156 "transport_ack_timeout": 0, 00:22:48.156 "ctrlr_loss_timeout_sec": 0, 00:22:48.156 "reconnect_delay_sec": 0, 00:22:48.156 "fast_io_fail_timeout_sec": 0, 00:22:48.156 "generate_uuids": false, 00:22:48.156 "transport_tos": 0, 00:22:48.156 "io_path_stat": false, 00:22:48.156 "allow_accel_sequence": false 00:22:48.156 } 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "method": "bdev_nvme_attach_controller", 00:22:48.156 "params": { 00:22:48.156 "name": "TLSTEST", 00:22:48.156 "trtype": "TCP", 00:22:48.156 "adrfam": "IPv4", 00:22:48.156 "traddr": "10.0.0.2", 00:22:48.156 "trsvcid": "4420", 00:22:48.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.156 "prchk_reftag": false, 00:22:48.156 "prchk_guard": false, 00:22:48.156 "ctrlr_loss_timeout_sec": 0, 00:22:48.156 "reconnect_delay_sec": 0, 00:22:48.156 "fast_io_fail_timeout_sec": 0, 00:22:48.156 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:48.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.156 "hdgst": false, 00:22:48.156 "ddgst": false 00:22:48.156 } 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "method": "bdev_nvme_set_hotplug", 00:22:48.156 "params": { 00:22:48.156 
"period_us": 100000, 00:22:48.156 "enable": false 00:22:48.156 } 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "method": "bdev_wait_for_examine" 00:22:48.156 } 00:22:48.156 ] 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "subsystem": "nbd", 00:22:48.156 "config": [] 00:22:48.156 } 00:22:48.156 ] 00:22:48.156 }' 00:22:48.156 17:32:56 -- target/tls.sh@208 -- # killprocess 3244504 00:22:48.156 17:32:56 -- common/autotest_common.sh@926 -- # '[' -z 3244504 ']' 00:22:48.156 17:32:56 -- common/autotest_common.sh@930 -- # kill -0 3244504 00:22:48.156 17:32:56 -- common/autotest_common.sh@931 -- # uname 00:22:48.156 17:32:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:48.156 17:32:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3244504 00:22:48.156 17:32:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:48.156 17:32:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:48.156 17:32:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3244504' 00:22:48.156 killing process with pid 3244504 00:22:48.156 17:32:56 -- common/autotest_common.sh@945 -- # kill 3244504 00:22:48.156 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.156 00:22:48.156 Latency(us) 00:22:48.156 [2024-10-13T15:32:56.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.156 [2024-10-13T15:32:56.680Z] =================================================================================================================== 00:22:48.156 [2024-10-13T15:32:56.680Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.156 17:32:56 -- common/autotest_common.sh@950 -- # wait 3244504 00:22:48.418 17:32:56 -- target/tls.sh@209 -- # killprocess 3244112 00:22:48.418 17:32:56 -- common/autotest_common.sh@926 -- # '[' -z 3244112 ']' 00:22:48.418 17:32:56 -- common/autotest_common.sh@930 -- # kill -0 3244112 00:22:48.418 17:32:56 -- common/autotest_common.sh@931 -- # 
uname 00:22:48.418 17:32:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:48.418 17:32:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3244112 00:22:48.418 17:32:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:48.418 17:32:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:48.418 17:32:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3244112' 00:22:48.418 killing process with pid 3244112 00:22:48.418 17:32:56 -- common/autotest_common.sh@945 -- # kill 3244112 00:22:48.418 17:32:56 -- common/autotest_common.sh@950 -- # wait 3244112 00:22:48.418 17:32:56 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:48.418 17:32:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:48.418 17:32:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:48.418 17:32:56 -- common/autotest_common.sh@10 -- # set +x 00:22:48.418 17:32:56 -- target/tls.sh@212 -- # echo '{ 00:22:48.418 "subsystems": [ 00:22:48.418 { 00:22:48.418 "subsystem": "iobuf", 00:22:48.418 "config": [ 00:22:48.418 { 00:22:48.418 "method": "iobuf_set_options", 00:22:48.418 "params": { 00:22:48.418 "small_pool_count": 8192, 00:22:48.418 "large_pool_count": 1024, 00:22:48.418 "small_bufsize": 8192, 00:22:48.418 "large_bufsize": 135168 00:22:48.418 } 00:22:48.418 } 00:22:48.418 ] 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "subsystem": "sock", 00:22:48.418 "config": [ 00:22:48.418 { 00:22:48.418 "method": "sock_impl_set_options", 00:22:48.418 "params": { 00:22:48.418 "impl_name": "posix", 00:22:48.418 "recv_buf_size": 2097152, 00:22:48.418 "send_buf_size": 2097152, 00:22:48.418 "enable_recv_pipe": true, 00:22:48.418 "enable_quickack": false, 00:22:48.418 "enable_placement_id": 0, 00:22:48.418 "enable_zerocopy_send_server": true, 00:22:48.418 "enable_zerocopy_send_client": false, 00:22:48.418 "zerocopy_threshold": 0, 00:22:48.418 "tls_version": 0, 00:22:48.418 "enable_ktls": false 
00:22:48.418 } 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "method": "sock_impl_set_options", 00:22:48.418 "params": { 00:22:48.418 "impl_name": "ssl", 00:22:48.418 "recv_buf_size": 4096, 00:22:48.418 "send_buf_size": 4096, 00:22:48.418 "enable_recv_pipe": true, 00:22:48.418 "enable_quickack": false, 00:22:48.418 "enable_placement_id": 0, 00:22:48.418 "enable_zerocopy_send_server": true, 00:22:48.418 "enable_zerocopy_send_client": false, 00:22:48.418 "zerocopy_threshold": 0, 00:22:48.418 "tls_version": 0, 00:22:48.418 "enable_ktls": false 00:22:48.418 } 00:22:48.418 } 00:22:48.418 ] 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "subsystem": "vmd", 00:22:48.418 "config": [] 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "subsystem": "accel", 00:22:48.418 "config": [ 00:22:48.418 { 00:22:48.418 "method": "accel_set_options", 00:22:48.418 "params": { 00:22:48.418 "small_cache_size": 128, 00:22:48.418 "large_cache_size": 16, 00:22:48.418 "task_count": 2048, 00:22:48.418 "sequence_count": 2048, 00:22:48.418 "buf_count": 2048 00:22:48.418 } 00:22:48.418 } 00:22:48.418 ] 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "subsystem": "bdev", 00:22:48.418 "config": [ 00:22:48.418 { 00:22:48.418 "method": "bdev_set_options", 00:22:48.418 "params": { 00:22:48.418 "bdev_io_pool_size": 65535, 00:22:48.418 "bdev_io_cache_size": 256, 00:22:48.418 "bdev_auto_examine": true, 00:22:48.418 "iobuf_small_cache_size": 128, 00:22:48.418 "iobuf_large_cache_size": 16 00:22:48.418 } 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "method": "bdev_raid_set_options", 00:22:48.418 "params": { 00:22:48.418 "process_window_size_kb": 1024 00:22:48.418 } 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "method": "bdev_iscsi_set_options", 00:22:48.418 "params": { 00:22:48.418 "timeout_sec": 30 00:22:48.418 } 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "method": "bdev_nvme_set_options", 00:22:48.418 "params": { 00:22:48.418 "action_on_timeout": "none", 00:22:48.418 "timeout_us": 0, 00:22:48.418 "timeout_admin_us": 0, 
00:22:48.418 "keep_alive_timeout_ms": 10000, 00:22:48.418 "transport_retry_count": 4, 00:22:48.418 "arbitration_burst": 0, 00:22:48.418 "low_priority_weight": 0, 00:22:48.418 "medium_priority_weight": 0, 00:22:48.418 "high_priority_weight": 0, 00:22:48.418 "nvme_adminq_poll_period_us": 10000, 00:22:48.418 "nvme_ioq_poll_period_us": 0, 00:22:48.418 "io_queue_requests": 0, 00:22:48.418 "delay_cmd_submit": true, 00:22:48.418 "bdev_retry_count": 3, 00:22:48.418 "transport_ack_timeout": 0, 00:22:48.418 "ctrlr_loss_timeout_sec": 0, 00:22:48.418 "reconnect_delay_sec": 0, 00:22:48.418 "fast_io_fail_timeout_sec": 0, 00:22:48.418 "generate_uuids": false, 00:22:48.418 "transport_tos": 0, 00:22:48.418 "io_path_stat": false, 00:22:48.418 "allow_accel_sequence": false 00:22:48.418 } 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "method": "bdev_nvme_set_hotplug", 00:22:48.418 "params": { 00:22:48.418 "period_us": 100000, 00:22:48.418 "enable": false 00:22:48.418 } 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "method": "bdev_malloc_create", 00:22:48.418 "params": { 00:22:48.418 "name": "malloc0", 00:22:48.418 "num_blocks": 8192, 00:22:48.418 "block_size": 4096, 00:22:48.418 "physical_block_size": 4096, 00:22:48.418 "uuid": "bfb5628c-6d8d-4f8b-95ff-412f4e928703", 00:22:48.418 "optimal_io_boundary": 0 00:22:48.418 } 00:22:48.418 }, 00:22:48.418 { 00:22:48.418 "method": "bdev_wait_for_examine" 00:22:48.418 } 00:22:48.419 ] 00:22:48.419 }, 00:22:48.419 { 00:22:48.419 "subsystem": "nbd", 00:22:48.419 "config": [] 00:22:48.419 }, 00:22:48.419 { 00:22:48.419 "subsystem": "scheduler", 00:22:48.419 "config": [ 00:22:48.419 { 00:22:48.419 "method": "framework_set_scheduler", 00:22:48.419 "params": { 00:22:48.419 "name": "static" 00:22:48.419 } 00:22:48.419 } 00:22:48.419 ] 00:22:48.419 }, 00:22:48.419 { 00:22:48.419 "subsystem": "nvmf", 00:22:48.419 "config": [ 00:22:48.419 { 00:22:48.419 "method": "nvmf_set_config", 00:22:48.419 "params": { 00:22:48.419 "discovery_filter": "match_any", 
00:22:48.419 "admin_cmd_passthru": { 00:22:48.419 "identify_ctrlr": false 00:22:48.419 } 00:22:48.419 } 00:22:48.419 }, 00:22:48.419 { 00:22:48.419 "method": "nvmf_set_max_subsystems", 00:22:48.419 "params": { 00:22:48.419 "max_subsystems": 1024 00:22:48.419 } 00:22:48.419 }, 00:22:48.419 { 00:22:48.419 "method": "nvmf_set_crdt", 00:22:48.419 "params": { 00:22:48.419 "crdt1": 0, 00:22:48.419 "crdt2": 0, 00:22:48.419 "crdt3": 0 00:22:48.419 } 00:22:48.419 }, 00:22:48.419 { 00:22:48.419 "method": "nvmf_create_transport", 00:22:48.419 "params": { 00:22:48.419 "trtype": "TCP", 00:22:48.419 "max_queue_depth": 128, 00:22:48.419 "max_io_qpairs_per_ctrlr": 127, 00:22:48.419 "in_capsule_data_size": 4096, 00:22:48.419 "max_io_size": 131072, 00:22:48.419 "io_unit_size": 131072, 00:22:48.419 "max_aq_depth": 128, 00:22:48.419 "num_shared_buffers": 511, 00:22:48.419 "buf_cache_size": 4294967295, 00:22:48.419 "dif_insert_or_strip": false, 00:22:48.419 "zcopy": false, 00:22:48.419 "c2h_success": false, 00:22:48.419 "sock_priority": 0, 00:22:48.419 "abort_timeout_sec": 1 00:22:48.419 } 00:22:48.419 }, 00:22:48.419 { 00:22:48.419 "method": "nvmf_create_subsystem", 00:22:48.419 "params": { 00:22:48.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.419 "allow_any_host": false, 00:22:48.419 "serial_number": "SPDK00000000000001", 00:22:48.419 "model_number": "SPDK bdev Controller", 00:22:48.419 "max_namespaces": 10, 00:22:48.419 "min_cntlid": 1, 00:22:48.419 "max_cntlid": 65519, 00:22:48.419 "ana_reporting": false 00:22:48.419 } 00:22:48.419 }, 00:22:48.419 { 00:22:48.419 "method": "nvmf_subsystem_add_host", 00:22:48.419 "params": { 00:22:48.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.419 "host": "nqn.2016-06.io.spdk:host1", 00:22:48.419 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:48.419 } 00:22:48.419 }, 00:22:48.419 { 00:22:48.419 "method": "nvmf_subsystem_add_ns", 00:22:48.419 "params": { 00:22:48.419 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:48.419 "namespace": { 00:22:48.419 "nsid": 1, 00:22:48.419 "bdev_name": "malloc0", 00:22:48.419 "nguid": "BFB5628C6D8D4F8B95FF412F4E928703", 00:22:48.419 "uuid": "bfb5628c-6d8d-4f8b-95ff-412f4e928703" 00:22:48.419 } 00:22:48.419 } 00:22:48.419 }, 00:22:48.419 { 00:22:48.419 "method": "nvmf_subsystem_add_listener", 00:22:48.419 "params": { 00:22:48.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.419 "listen_address": { 00:22:48.419 "trtype": "TCP", 00:22:48.419 "adrfam": "IPv4", 00:22:48.419 "traddr": "10.0.0.2", 00:22:48.419 "trsvcid": "4420" 00:22:48.419 }, 00:22:48.419 "secure_channel": true 00:22:48.419 } 00:22:48.419 } 00:22:48.419 ] 00:22:48.419 } 00:22:48.419 ] 00:22:48.419 }' 00:22:48.419 17:32:56 -- nvmf/common.sh@469 -- # nvmfpid=3244939 00:22:48.419 17:32:56 -- nvmf/common.sh@470 -- # waitforlisten 3244939 00:22:48.419 17:32:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:48.419 17:32:56 -- common/autotest_common.sh@819 -- # '[' -z 3244939 ']' 00:22:48.419 17:32:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.419 17:32:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:48.419 17:32:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.419 17:32:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:48.419 17:32:56 -- common/autotest_common.sh@10 -- # set +x 00:22:48.680 [2024-10-13 17:32:56.985358] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:22:48.680 [2024-10-13 17:32:56.985410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.680 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.680 [2024-10-13 17:32:57.068297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.680 [2024-10-13 17:32:57.095220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:48.680 [2024-10-13 17:32:57.095312] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.680 [2024-10-13 17:32:57.095318] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.680 [2024-10-13 17:32:57.095323] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.680 [2024-10-13 17:32:57.095337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.941 [2024-10-13 17:32:57.265049] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.941 [2024-10-13 17:32:57.297075] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:48.941 [2024-10-13 17:32:57.297274] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.514 17:32:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:49.514 17:32:57 -- common/autotest_common.sh@852 -- # return 0 00:22:49.514 17:32:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:49.514 17:32:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:49.514 17:32:57 -- common/autotest_common.sh@10 -- # set +x 00:22:49.514 17:32:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.514 17:32:57 -- target/tls.sh@216 -- # bdevperf_pid=3245097 
00:22:49.514 17:32:57 -- target/tls.sh@217 -- # waitforlisten 3245097 /var/tmp/bdevperf.sock 00:22:49.514 17:32:57 -- common/autotest_common.sh@819 -- # '[' -z 3245097 ']' 00:22:49.514 17:32:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.514 17:32:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:49.514 17:32:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.514 17:32:57 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:49.514 17:32:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:49.514 17:32:57 -- common/autotest_common.sh@10 -- # set +x 00:22:49.514 17:32:57 -- target/tls.sh@213 -- # echo '{ 00:22:49.514 "subsystems": [ 00:22:49.514 { 00:22:49.514 "subsystem": "iobuf", 00:22:49.514 "config": [ 00:22:49.514 { 00:22:49.514 "method": "iobuf_set_options", 00:22:49.514 "params": { 00:22:49.514 "small_pool_count": 8192, 00:22:49.514 "large_pool_count": 1024, 00:22:49.514 "small_bufsize": 8192, 00:22:49.514 "large_bufsize": 135168 00:22:49.514 } 00:22:49.514 } 00:22:49.514 ] 00:22:49.514 }, 00:22:49.514 { 00:22:49.514 "subsystem": "sock", 00:22:49.514 "config": [ 00:22:49.514 { 00:22:49.514 "method": "sock_impl_set_options", 00:22:49.514 "params": { 00:22:49.514 "impl_name": "posix", 00:22:49.514 "recv_buf_size": 2097152, 00:22:49.514 "send_buf_size": 2097152, 00:22:49.514 "enable_recv_pipe": true, 00:22:49.514 "enable_quickack": false, 00:22:49.514 "enable_placement_id": 0, 00:22:49.514 "enable_zerocopy_send_server": true, 00:22:49.514 "enable_zerocopy_send_client": false, 00:22:49.514 "zerocopy_threshold": 0, 00:22:49.514 "tls_version": 0, 00:22:49.514 
"enable_ktls": false 00:22:49.514 } 00:22:49.514 }, 00:22:49.514 { 00:22:49.514 "method": "sock_impl_set_options", 00:22:49.514 "params": { 00:22:49.514 "impl_name": "ssl", 00:22:49.514 "recv_buf_size": 4096, 00:22:49.514 "send_buf_size": 4096, 00:22:49.514 "enable_recv_pipe": true, 00:22:49.514 "enable_quickack": false, 00:22:49.514 "enable_placement_id": 0, 00:22:49.514 "enable_zerocopy_send_server": true, 00:22:49.514 "enable_zerocopy_send_client": false, 00:22:49.514 "zerocopy_threshold": 0, 00:22:49.514 "tls_version": 0, 00:22:49.514 "enable_ktls": false 00:22:49.514 } 00:22:49.514 } 00:22:49.514 ] 00:22:49.514 }, 00:22:49.514 { 00:22:49.514 "subsystem": "vmd", 00:22:49.514 "config": [] 00:22:49.514 }, 00:22:49.514 { 00:22:49.514 "subsystem": "accel", 00:22:49.514 "config": [ 00:22:49.514 { 00:22:49.514 "method": "accel_set_options", 00:22:49.514 "params": { 00:22:49.514 "small_cache_size": 128, 00:22:49.514 "large_cache_size": 16, 00:22:49.514 "task_count": 2048, 00:22:49.514 "sequence_count": 2048, 00:22:49.514 "buf_count": 2048 00:22:49.514 } 00:22:49.514 } 00:22:49.514 ] 00:22:49.514 }, 00:22:49.514 { 00:22:49.514 "subsystem": "bdev", 00:22:49.514 "config": [ 00:22:49.514 { 00:22:49.514 "method": "bdev_set_options", 00:22:49.514 "params": { 00:22:49.514 "bdev_io_pool_size": 65535, 00:22:49.514 "bdev_io_cache_size": 256, 00:22:49.514 "bdev_auto_examine": true, 00:22:49.514 "iobuf_small_cache_size": 128, 00:22:49.514 "iobuf_large_cache_size": 16 00:22:49.514 } 00:22:49.514 }, 00:22:49.514 { 00:22:49.514 "method": "bdev_raid_set_options", 00:22:49.514 "params": { 00:22:49.514 "process_window_size_kb": 1024 00:22:49.514 } 00:22:49.514 }, 00:22:49.514 { 00:22:49.514 "method": "bdev_iscsi_set_options", 00:22:49.514 "params": { 00:22:49.514 "timeout_sec": 30 00:22:49.514 } 00:22:49.514 }, 00:22:49.514 { 00:22:49.514 "method": "bdev_nvme_set_options", 00:22:49.514 "params": { 00:22:49.514 "action_on_timeout": "none", 00:22:49.514 "timeout_us": 0, 00:22:49.514 
"timeout_admin_us": 0, 00:22:49.514 "keep_alive_timeout_ms": 10000, 00:22:49.514 "transport_retry_count": 4, 00:22:49.514 "arbitration_burst": 0, 00:22:49.514 "low_priority_weight": 0, 00:22:49.514 "medium_priority_weight": 0, 00:22:49.514 "high_priority_weight": 0, 00:22:49.514 "nvme_adminq_poll_period_us": 10000, 00:22:49.514 "nvme_ioq_poll_period_us": 0, 00:22:49.514 "io_queue_requests": 512, 00:22:49.514 "delay_cmd_submit": true, 00:22:49.514 "bdev_retry_count": 3, 00:22:49.514 "transport_ack_timeout": 0, 00:22:49.514 "ctrlr_loss_timeout_sec": 0, 00:22:49.514 "reconnect_delay_sec": 0, 00:22:49.514 "fast_io_fail_timeout_sec": 0, 00:22:49.514 "generate_uuids": false, 00:22:49.514 "transport_tos": 0, 00:22:49.514 "io_path_stat": false, 00:22:49.514 "allow_accel_sequence": false 00:22:49.514 } 00:22:49.514 }, 00:22:49.514 { 00:22:49.514 "method": "bdev_nvme_attach_controller", 00:22:49.514 "params": { 00:22:49.514 "name": "TLSTEST", 00:22:49.514 "trtype": "TCP", 00:22:49.514 "adrfam": "IPv4", 00:22:49.514 "traddr": "10.0.0.2", 00:22:49.514 "trsvcid": "4420", 00:22:49.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.514 "prchk_reftag": false, 00:22:49.514 "prchk_guard": false, 00:22:49.514 "ctrlr_loss_timeout_sec": 0, 00:22:49.514 "reconnect_delay_sec": 0, 00:22:49.514 "fast_io_fail_timeout_sec": 0, 00:22:49.514 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:49.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.514 "hdgst": false, 00:22:49.514 "ddgst": false 00:22:49.514 } 00:22:49.514 }, 00:22:49.514 { 00:22:49.514 "method": "bdev_nvme_set_hotplug", 00:22:49.514 "params": { 00:22:49.514 "period_us": 100000, 00:22:49.514 "enable": false 00:22:49.514 } 00:22:49.515 }, 00:22:49.515 { 00:22:49.515 "method": "bdev_wait_for_examine" 00:22:49.515 } 00:22:49.515 ] 00:22:49.515 }, 00:22:49.515 { 00:22:49.515 "subsystem": "nbd", 00:22:49.515 "config": [] 00:22:49.515 } 00:22:49.515 ] 00:22:49.515 }' 00:22:49.515 
[2024-10-13 17:32:57.845583] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:49.515 [2024-10-13 17:32:57.845637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245097 ] 00:22:49.515 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.515 [2024-10-13 17:32:57.898339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.515 [2024-10-13 17:32:57.924876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.515 [2024-10-13 17:32:58.035575] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.457 17:32:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:50.457 17:32:58 -- common/autotest_common.sh@852 -- # return 0 00:22:50.457 17:32:58 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:50.457 Running I/O for 10 seconds... 
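bdevperf was launched above with -q 128 -o 4096 -w verify -t 10, so the MiB/s column in its result table is just IOPS × 4 KiB per second. A quick cross-check of the TLSTESTn1 figures with awk (the IOPS value is copied from this run's output):

```shell
# MiB/s = IOPS * io_size / 2^20; with -o 4096 and the ~6792 IOPS
# reported by TLSTESTn1 this reproduces the table's 26.53 MiB/s.
awk 'BEGIN { printf "%.2f\n", 6791.96 * 4096 / (1024 * 1024) }'
```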
00:23:00.463 00:23:00.463 Latency(us) 00:23:00.463 [2024-10-13T15:33:08.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.463 [2024-10-13T15:33:08.987Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:00.463 Verification LBA range: start 0x0 length 0x2000 00:23:00.463 TLSTESTn1 : 10.01 6791.96 26.53 0.00 0.00 18828.33 3659.09 53302.61 00:23:00.463 [2024-10-13T15:33:08.987Z] =================================================================================================================== 00:23:00.463 [2024-10-13T15:33:08.987Z] Total : 6791.96 26.53 0.00 0.00 18828.33 3659.09 53302.61 00:23:00.463 0 00:23:00.463 17:33:08 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:00.463 17:33:08 -- target/tls.sh@223 -- # killprocess 3245097 00:23:00.463 17:33:08 -- common/autotest_common.sh@926 -- # '[' -z 3245097 ']' 00:23:00.463 17:33:08 -- common/autotest_common.sh@930 -- # kill -0 3245097 00:23:00.464 17:33:08 -- common/autotest_common.sh@931 -- # uname 00:23:00.464 17:33:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:00.464 17:33:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3245097 00:23:00.464 17:33:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:00.464 17:33:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:00.464 17:33:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3245097' 00:23:00.464 killing process with pid 3245097 00:23:00.464 17:33:08 -- common/autotest_common.sh@945 -- # kill 3245097 00:23:00.464 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.464 00:23:00.464 Latency(us) 00:23:00.464 [2024-10-13T15:33:08.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.464 [2024-10-13T15:33:08.988Z] 
=================================================================================================================== 00:23:00.464 [2024-10-13T15:33:08.988Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.464 17:33:08 -- common/autotest_common.sh@950 -- # wait 3245097 00:23:00.464 17:33:08 -- target/tls.sh@224 -- # killprocess 3244939 00:23:00.464 17:33:08 -- common/autotest_common.sh@926 -- # '[' -z 3244939 ']' 00:23:00.464 17:33:08 -- common/autotest_common.sh@930 -- # kill -0 3244939 00:23:00.464 17:33:08 -- common/autotest_common.sh@931 -- # uname 00:23:00.464 17:33:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:00.464 17:33:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3244939 00:23:00.725 17:33:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:00.725 17:33:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:00.725 17:33:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3244939' 00:23:00.725 killing process with pid 3244939 00:23:00.725 17:33:09 -- common/autotest_common.sh@945 -- # kill 3244939 00:23:00.725 17:33:09 -- common/autotest_common.sh@950 -- # wait 3244939 00:23:00.725 17:33:09 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:23:00.725 17:33:09 -- target/tls.sh@227 -- # cleanup 00:23:00.725 17:33:09 -- target/tls.sh@15 -- # process_shm --id 0 00:23:00.725 17:33:09 -- common/autotest_common.sh@796 -- # type=--id 00:23:00.725 17:33:09 -- common/autotest_common.sh@797 -- # id=0 00:23:00.725 17:33:09 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:23:00.725 17:33:09 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:00.725 17:33:09 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:23:00.725 17:33:09 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:23:00.725 17:33:09 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:23:00.725 17:33:09 -- 
common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:00.725 nvmf_trace.0 00:23:00.725 17:33:09 -- common/autotest_common.sh@811 -- # return 0 00:23:00.725 17:33:09 -- target/tls.sh@16 -- # killprocess 3245097 00:23:00.725 17:33:09 -- common/autotest_common.sh@926 -- # '[' -z 3245097 ']' 00:23:00.725 17:33:09 -- common/autotest_common.sh@930 -- # kill -0 3245097 00:23:00.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3245097) - No such process 00:23:00.725 17:33:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3245097 is not found' 00:23:00.725 Process with pid 3245097 is not found 00:23:00.725 17:33:09 -- target/tls.sh@17 -- # nvmftestfini 00:23:00.725 17:33:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:00.725 17:33:09 -- nvmf/common.sh@116 -- # sync 00:23:00.725 17:33:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:00.725 17:33:09 -- nvmf/common.sh@119 -- # set +e 00:23:00.725 17:33:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:00.725 17:33:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:00.725 rmmod nvme_tcp 00:23:00.987 rmmod nvme_fabrics 00:23:00.987 rmmod nvme_keyring 00:23:00.987 17:33:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:00.987 17:33:09 -- nvmf/common.sh@123 -- # set -e 00:23:00.987 17:33:09 -- nvmf/common.sh@124 -- # return 0 00:23:00.987 17:33:09 -- nvmf/common.sh@477 -- # '[' -n 3244939 ']' 00:23:00.987 17:33:09 -- nvmf/common.sh@478 -- # killprocess 3244939 00:23:00.987 17:33:09 -- common/autotest_common.sh@926 -- # '[' -z 3244939 ']' 00:23:00.987 17:33:09 -- common/autotest_common.sh@930 -- # kill -0 3244939 00:23:00.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3244939) - No such process 00:23:00.987 17:33:09 -- common/autotest_common.sh@953 -- # echo 'Process with 
pid 3244939 is not found' 00:23:00.987 Process with pid 3244939 is not found 00:23:00.987 17:33:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:00.987 17:33:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:00.987 17:33:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:00.987 17:33:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.987 17:33:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:00.987 17:33:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.987 17:33:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.987 17:33:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.903 17:33:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:02.903 17:33:11 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:23:02.903 00:23:02.903 real 1m11.378s 00:23:02.903 user 1m46.873s 00:23:02.903 sys 0m23.993s 00:23:02.903 17:33:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:02.903 17:33:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.903 ************************************ 00:23:02.903 END TEST nvmf_tls 00:23:02.903 ************************************ 00:23:02.903 17:33:11 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:02.903 17:33:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:02.903 17:33:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:02.903 17:33:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.903 ************************************ 00:23:02.903 START TEST nvmf_fips 00:23:02.903 ************************************ 00:23:02.903 17:33:11 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:03.165 * Looking for test storage... 00:23:03.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:03.165 17:33:11 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.165 17:33:11 -- nvmf/common.sh@7 -- # uname -s 00:23:03.165 17:33:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.165 17:33:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.165 17:33:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.165 17:33:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.165 17:33:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.165 17:33:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.165 17:33:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.165 17:33:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.165 17:33:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.165 17:33:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.165 17:33:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.165 17:33:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.165 17:33:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.165 17:33:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.165 17:33:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.165 17:33:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.165 17:33:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.165 17:33:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.165 17:33:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:03.165 17:33:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.165 17:33:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.165 17:33:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.165 17:33:11 -- paths/export.sh@5 -- # export PATH 00:23:03.165 17:33:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.165 17:33:11 -- nvmf/common.sh@46 -- # : 0 00:23:03.165 17:33:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:03.165 17:33:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:03.165 17:33:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:03.165 17:33:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.165 17:33:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.165 17:33:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:03.165 17:33:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:03.165 17:33:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:03.165 17:33:11 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:03.165 17:33:11 -- fips/fips.sh@89 -- # check_openssl_version 00:23:03.165 17:33:11 -- fips/fips.sh@83 -- # local target=3.0.0 00:23:03.165 17:33:11 -- fips/fips.sh@85 -- # openssl version 00:23:03.165 17:33:11 -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:03.165 17:33:11 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:23:03.165 17:33:11 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:03.165 17:33:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:03.165 17:33:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:03.165 17:33:11 -- scripts/common.sh@335 -- # IFS=.-: 00:23:03.165 17:33:11 -- scripts/common.sh@335 -- # read -ra ver1 00:23:03.165 17:33:11 -- scripts/common.sh@336 -- # IFS=.-: 
00:23:03.165 17:33:11 -- scripts/common.sh@336 -- # read -ra ver2 00:23:03.165 17:33:11 -- scripts/common.sh@337 -- # local 'op=>=' 00:23:03.165 17:33:11 -- scripts/common.sh@339 -- # ver1_l=3 00:23:03.165 17:33:11 -- scripts/common.sh@340 -- # ver2_l=3 00:23:03.165 17:33:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:03.165 17:33:11 -- scripts/common.sh@343 -- # case "$op" in 00:23:03.165 17:33:11 -- scripts/common.sh@347 -- # : 1 00:23:03.165 17:33:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:03.165 17:33:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:03.165 17:33:11 -- scripts/common.sh@364 -- # decimal 3 00:23:03.165 17:33:11 -- scripts/common.sh@352 -- # local d=3 00:23:03.165 17:33:11 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:03.165 17:33:11 -- scripts/common.sh@354 -- # echo 3 00:23:03.165 17:33:11 -- scripts/common.sh@364 -- # ver1[v]=3 00:23:03.165 17:33:11 -- scripts/common.sh@365 -- # decimal 3 00:23:03.165 17:33:11 -- scripts/common.sh@352 -- # local d=3 00:23:03.165 17:33:11 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:03.165 17:33:11 -- scripts/common.sh@354 -- # echo 3 00:23:03.165 17:33:11 -- scripts/common.sh@365 -- # ver2[v]=3 00:23:03.165 17:33:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:03.165 17:33:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:03.165 17:33:11 -- scripts/common.sh@363 -- # (( v++ )) 00:23:03.165 17:33:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:03.165 17:33:11 -- scripts/common.sh@364 -- # decimal 1 00:23:03.165 17:33:11 -- scripts/common.sh@352 -- # local d=1 00:23:03.165 17:33:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:03.165 17:33:11 -- scripts/common.sh@354 -- # echo 1 00:23:03.165 17:33:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:03.165 17:33:11 -- scripts/common.sh@365 -- # decimal 0 00:23:03.165 17:33:11 -- scripts/common.sh@352 -- # local d=0 00:23:03.165 17:33:11 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:03.165 17:33:11 -- scripts/common.sh@354 -- # echo 0 00:23:03.165 17:33:11 -- scripts/common.sh@365 -- # ver2[v]=0 00:23:03.165 17:33:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:03.165 17:33:11 -- scripts/common.sh@366 -- # return 0 00:23:03.165 17:33:11 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:03.165 17:33:11 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:03.165 17:33:11 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:03.165 17:33:11 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:03.165 17:33:11 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:03.165 17:33:11 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:03.165 17:33:11 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:03.165 17:33:11 -- fips/fips.sh@113 -- # build_openssl_config 00:23:03.165 17:33:11 -- fips/fips.sh@37 -- # cat 00:23:03.165 17:33:11 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:03.165 17:33:11 -- fips/fips.sh@58 -- # cat - 00:23:03.165 17:33:11 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:03.165 17:33:11 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:03.165 17:33:11 -- fips/fips.sh@116 -- # mapfile -t providers 00:23:03.165 17:33:11 -- fips/fips.sh@116 -- # openssl list -providers 00:23:03.165 17:33:11 -- fips/fips.sh@116 -- # grep name 00:23:03.165 17:33:11 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:03.165 17:33:11 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:03.165 17:33:11 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:03.165 17:33:11 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:03.165 17:33:11 -- common/autotest_common.sh@640 -- # local es=0 00:23:03.165 17:33:11 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:03.165 17:33:11 -- fips/fips.sh@127 -- # : 00:23:03.166 17:33:11 -- common/autotest_common.sh@628 -- # local arg=openssl 00:23:03.166 17:33:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:03.166 17:33:11 -- common/autotest_common.sh@632 -- # type -t openssl 00:23:03.166 17:33:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:03.166 17:33:11 -- common/autotest_common.sh@634 -- # type -P openssl 00:23:03.166 17:33:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:03.166 17:33:11 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:23:03.166 17:33:11 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:23:03.166 17:33:11 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:23:03.426 Error setting digest 00:23:03.426 4072052A797F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:03.426 
4072052A797F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:03.426 17:33:11 -- common/autotest_common.sh@643 -- # es=1 00:23:03.426 17:33:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:03.426 17:33:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:03.426 17:33:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:03.426 17:33:11 -- fips/fips.sh@130 -- # nvmftestinit 00:23:03.426 17:33:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:03.426 17:33:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.426 17:33:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:03.426 17:33:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:03.426 17:33:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:03.426 17:33:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.426 17:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.426 17:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.426 17:33:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:03.426 17:33:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:03.426 17:33:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:03.426 17:33:11 -- common/autotest_common.sh@10 -- # set +x 00:23:11.575 17:33:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:11.575 17:33:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:11.575 17:33:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:11.575 17:33:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:11.575 17:33:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:11.575 17:33:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:11.575 17:33:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:11.575 17:33:18 -- nvmf/common.sh@294 -- # net_devs=() 00:23:11.575 17:33:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:11.575 17:33:18 -- 
nvmf/common.sh@295 -- # e810=() 00:23:11.575 17:33:18 -- nvmf/common.sh@295 -- # local -ga e810 00:23:11.575 17:33:18 -- nvmf/common.sh@296 -- # x722=() 00:23:11.575 17:33:18 -- nvmf/common.sh@296 -- # local -ga x722 00:23:11.575 17:33:18 -- nvmf/common.sh@297 -- # mlx=() 00:23:11.575 17:33:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:11.575 17:33:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.575 17:33:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:11.575 17:33:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:11.575 17:33:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:11.575 17:33:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:11.575 17:33:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:11.575 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:11.575 
17:33:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:11.575 17:33:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:11.575 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:11.575 17:33:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:11.575 17:33:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:11.575 17:33:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.575 17:33:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:11.575 17:33:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.575 17:33:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:11.575 Found net devices under 0000:31:00.0: cvl_0_0 00:23:11.575 17:33:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.575 17:33:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:11.575 17:33:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.575 17:33:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:11.575 17:33:18 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.575 17:33:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:11.575 Found net devices under 0000:31:00.1: cvl_0_1 00:23:11.575 17:33:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.575 17:33:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:11.575 17:33:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:11.575 17:33:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:11.575 17:33:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:11.575 17:33:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.575 17:33:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.575 17:33:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.575 17:33:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:11.575 17:33:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.575 17:33:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.575 17:33:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:11.575 17:33:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.575 17:33:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.575 17:33:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:11.575 17:33:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:11.575 17:33:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.575 17:33:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.575 17:33:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.575 17:33:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.575 17:33:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:11.575 17:33:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:23:11.575 17:33:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.575 17:33:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.575 17:33:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:11.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:23:11.575 00:23:11.575 --- 10.0.0.2 ping statistics --- 00:23:11.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.575 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:23:11.575 17:33:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:23:11.575 00:23:11.575 --- 10.0.0.1 ping statistics --- 00:23:11.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.575 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:23:11.575 17:33:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.575 17:33:19 -- nvmf/common.sh@410 -- # return 0 00:23:11.575 17:33:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:11.575 17:33:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.575 17:33:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:11.575 17:33:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:11.575 17:33:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.575 17:33:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:11.575 17:33:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:11.575 17:33:19 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:11.575 17:33:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:11.575 17:33:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:11.575 17:33:19 -- common/autotest_common.sh@10 -- # set +x 
00:23:11.575 17:33:19 -- nvmf/common.sh@469 -- # nvmfpid=3252112 00:23:11.575 17:33:19 -- nvmf/common.sh@470 -- # waitforlisten 3252112 00:23:11.576 17:33:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.576 17:33:19 -- common/autotest_common.sh@819 -- # '[' -z 3252112 ']' 00:23:11.576 17:33:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.576 17:33:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:11.576 17:33:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.576 17:33:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:11.576 17:33:19 -- common/autotest_common.sh@10 -- # set +x 00:23:11.576 [2024-10-13 17:33:19.295658] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:11.576 [2024-10-13 17:33:19.295716] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.576 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.576 [2024-10-13 17:33:19.384151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.576 [2024-10-13 17:33:19.427856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:11.576 [2024-10-13 17:33:19.427999] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.576 [2024-10-13 17:33:19.428010] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:11.576 [2024-10-13 17:33:19.428018] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.576 [2024-10-13 17:33:19.428047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.576 17:33:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:11.576 17:33:20 -- common/autotest_common.sh@852 -- # return 0 00:23:11.576 17:33:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:11.576 17:33:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:11.576 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.837 17:33:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.837 17:33:20 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:11.837 17:33:20 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:11.837 17:33:20 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:11.837 17:33:20 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:11.837 17:33:20 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:11.837 17:33:20 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:11.837 17:33:20 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:11.837 17:33:20 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:11.837 [2024-10-13 17:33:20.269764] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.837 [2024-10-13 17:33:20.285762] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.837 [2024-10-13 17:33:20.286108] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:11.837 malloc0 00:23:11.837 17:33:20 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.837 17:33:20 -- fips/fips.sh@147 -- # bdevperf_pid=3252465 00:23:11.837 17:33:20 -- fips/fips.sh@148 -- # waitforlisten 3252465 /var/tmp/bdevperf.sock 00:23:11.837 17:33:20 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.837 17:33:20 -- common/autotest_common.sh@819 -- # '[' -z 3252465 ']' 00:23:11.837 17:33:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.837 17:33:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:11.837 17:33:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.837 17:33:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:11.837 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:23:12.098 [2024-10-13 17:33:20.414041] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:23:12.098 [2024-10-13 17:33:20.414117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252465 ] 00:23:12.098 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.098 [2024-10-13 17:33:20.470751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.098 [2024-10-13 17:33:20.505452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.040 17:33:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:13.040 17:33:21 -- common/autotest_common.sh@852 -- # return 0 00:23:13.040 17:33:21 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:13.040 [2024-10-13 17:33:21.342346] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.040 TLSTESTn1 00:23:13.040 17:33:21 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:13.040 Running I/O for 10 seconds... 
00:23:23.037 00:23:23.037 Latency(us) 00:23:23.037 [2024-10-13T15:33:31.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.037 [2024-10-13T15:33:31.561Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:23.037 Verification LBA range: start 0x0 length 0x2000 00:23:23.037 TLSTESTn1 : 10.02 6235.99 24.36 0.00 0.00 20506.67 3713.71 55487.15 00:23:23.037 [2024-10-13T15:33:31.561Z] =================================================================================================================== 00:23:23.037 [2024-10-13T15:33:31.561Z] Total : 6235.99 24.36 0.00 0.00 20506.67 3713.71 55487.15 00:23:23.037 0 00:23:23.298 17:33:31 -- fips/fips.sh@1 -- # cleanup 00:23:23.298 17:33:31 -- fips/fips.sh@15 -- # process_shm --id 0 00:23:23.298 17:33:31 -- common/autotest_common.sh@796 -- # type=--id 00:23:23.298 17:33:31 -- common/autotest_common.sh@797 -- # id=0 00:23:23.298 17:33:31 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:23:23.298 17:33:31 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:23.298 17:33:31 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:23:23.298 17:33:31 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:23:23.298 17:33:31 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:23:23.298 17:33:31 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:23.298 nvmf_trace.0 00:23:23.298 17:33:31 -- common/autotest_common.sh@811 -- # return 0 00:23:23.298 17:33:31 -- fips/fips.sh@16 -- # killprocess 3252465 00:23:23.298 17:33:31 -- common/autotest_common.sh@926 -- # '[' -z 3252465 ']' 00:23:23.298 17:33:31 -- common/autotest_common.sh@930 -- # kill -0 3252465 00:23:23.298 17:33:31 -- common/autotest_common.sh@931 -- # uname 00:23:23.298 17:33:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:23:23.298 17:33:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3252465 00:23:23.298 17:33:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:23.298 17:33:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:23.298 17:33:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3252465' 00:23:23.298 killing process with pid 3252465 00:23:23.298 17:33:31 -- common/autotest_common.sh@945 -- # kill 3252465 00:23:23.298 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.298 00:23:23.298 Latency(us) 00:23:23.298 [2024-10-13T15:33:31.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.298 [2024-10-13T15:33:31.822Z] =================================================================================================================== 00:23:23.298 [2024-10-13T15:33:31.822Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.298 17:33:31 -- common/autotest_common.sh@950 -- # wait 3252465 00:23:23.559 17:33:31 -- fips/fips.sh@17 -- # nvmftestfini 00:23:23.559 17:33:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:23.559 17:33:31 -- nvmf/common.sh@116 -- # sync 00:23:23.559 17:33:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:23.559 17:33:31 -- nvmf/common.sh@119 -- # set +e 00:23:23.559 17:33:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:23.559 17:33:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:23.559 rmmod nvme_tcp 00:23:23.559 rmmod nvme_fabrics 00:23:23.559 rmmod nvme_keyring 00:23:23.559 17:33:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:23.559 17:33:31 -- nvmf/common.sh@123 -- # set -e 00:23:23.559 17:33:31 -- nvmf/common.sh@124 -- # return 0 00:23:23.559 17:33:31 -- nvmf/common.sh@477 -- # '[' -n 3252112 ']' 00:23:23.559 17:33:31 -- nvmf/common.sh@478 -- # killprocess 3252112 00:23:23.559 17:33:31 -- common/autotest_common.sh@926 -- # '[' -z 3252112 ']' 00:23:23.559 17:33:31 -- 
common/autotest_common.sh@930 -- # kill -0 3252112 00:23:23.559 17:33:31 -- common/autotest_common.sh@931 -- # uname 00:23:23.559 17:33:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:23.559 17:33:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3252112 00:23:23.559 17:33:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:23.559 17:33:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:23.559 17:33:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3252112' 00:23:23.559 killing process with pid 3252112 00:23:23.559 17:33:31 -- common/autotest_common.sh@945 -- # kill 3252112 00:23:23.559 17:33:31 -- common/autotest_common.sh@950 -- # wait 3252112 00:23:23.820 17:33:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:23.820 17:33:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:23.820 17:33:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:23.820 17:33:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.820 17:33:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:23.820 17:33:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.820 17:33:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.820 17:33:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.733 17:33:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:25.733 17:33:34 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:25.733 00:23:25.733 real 0m22.756s 00:23:25.733 user 0m23.787s 00:23:25.733 sys 0m9.714s 00:23:25.733 17:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.733 17:33:34 -- common/autotest_common.sh@10 -- # set +x 00:23:25.733 ************************************ 00:23:25.733 END TEST nvmf_fips 00:23:25.733 ************************************ 00:23:25.733 17:33:34 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 
']' 00:23:25.733 17:33:34 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:25.733 17:33:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:25.733 17:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:25.733 17:33:34 -- common/autotest_common.sh@10 -- # set +x 00:23:25.733 ************************************ 00:23:25.733 START TEST nvmf_fuzz 00:23:25.733 ************************************ 00:23:25.733 17:33:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:25.994 * Looking for test storage... 00:23:25.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:25.994 17:33:34 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.994 17:33:34 -- nvmf/common.sh@7 -- # uname -s 00:23:25.994 17:33:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.994 17:33:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.994 17:33:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.994 17:33:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.994 17:33:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.994 17:33:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.994 17:33:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.994 17:33:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.994 17:33:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.994 17:33:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.994 17:33:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.994 17:33:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.995 17:33:34 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.995 17:33:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.995 17:33:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.995 17:33:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.995 17:33:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.995 17:33:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.995 17:33:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.995 17:33:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.995 17:33:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.995 17:33:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.995 17:33:34 -- paths/export.sh@5 -- # export PATH 00:23:25.995 17:33:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.995 17:33:34 -- nvmf/common.sh@46 -- # : 0 00:23:25.995 17:33:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:25.995 17:33:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:25.995 17:33:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:25.995 17:33:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.995 17:33:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.995 17:33:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:25.995 17:33:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:25.995 17:33:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:25.995 17:33:34 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:25.995 17:33:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:25.995 17:33:34 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:23:25.995 17:33:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:25.995 17:33:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:25.995 17:33:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:25.995 17:33:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.995 17:33:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.995 17:33:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.995 17:33:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:25.995 17:33:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:25.995 17:33:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:25.995 17:33:34 -- common/autotest_common.sh@10 -- # set +x 00:23:32.587 17:33:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:32.587 17:33:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:32.587 17:33:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:32.587 17:33:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:32.587 17:33:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:32.587 17:33:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:32.587 17:33:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:32.587 17:33:40 -- nvmf/common.sh@294 -- # net_devs=() 00:23:32.587 17:33:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:32.587 17:33:40 -- nvmf/common.sh@295 -- # e810=() 00:23:32.587 17:33:40 -- nvmf/common.sh@295 -- # local -ga e810 00:23:32.587 17:33:40 -- nvmf/common.sh@296 -- # x722=() 00:23:32.587 17:33:40 -- nvmf/common.sh@296 -- # local -ga x722 00:23:32.587 17:33:40 -- nvmf/common.sh@297 -- # mlx=() 00:23:32.587 17:33:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:32.587 17:33:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.587 17:33:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:32.587 17:33:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:32.587 17:33:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:32.587 17:33:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:32.587 17:33:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:32.587 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:32.587 17:33:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:32.587 17:33:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:32.587 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:23:32.587 17:33:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:32.587 17:33:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:32.587 17:33:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.587 17:33:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:32.587 17:33:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.587 17:33:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:32.587 Found net devices under 0000:31:00.0: cvl_0_0 00:23:32.587 17:33:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.587 17:33:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:32.587 17:33:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.587 17:33:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:32.587 17:33:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.587 17:33:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:32.587 Found net devices under 0000:31:00.1: cvl_0_1 00:23:32.587 17:33:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.587 17:33:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:32.587 17:33:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:32.587 17:33:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:32.587 17:33:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:32.587 17:33:40 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:32.587 17:33:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.587 17:33:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.587 17:33:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.587 17:33:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:32.587 17:33:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.587 17:33:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.587 17:33:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:32.587 17:33:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.587 17:33:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.587 17:33:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:32.587 17:33:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:32.587 17:33:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.587 17:33:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.587 17:33:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.587 17:33:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.587 17:33:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:32.587 17:33:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.849 17:33:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.849 17:33:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.849 17:33:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:32.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:32.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:23:32.849 00:23:32.849 --- 10.0.0.2 ping statistics --- 00:23:32.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.849 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:23:32.849 17:33:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:23:32.849 00:23:32.849 --- 10.0.0.1 ping statistics --- 00:23:32.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.849 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:23:32.849 17:33:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.849 17:33:41 -- nvmf/common.sh@410 -- # return 0 00:23:32.849 17:33:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:32.849 17:33:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.849 17:33:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:32.849 17:33:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:32.849 17:33:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.849 17:33:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:32.849 17:33:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:32.849 17:33:41 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3258856 00:23:32.849 17:33:41 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:32.849 17:33:41 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:32.849 17:33:41 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3258856 00:23:32.849 17:33:41 -- common/autotest_common.sh@819 -- # '[' -z 3258856 ']' 00:23:32.849 17:33:41 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:23:32.849 17:33:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:32.849 17:33:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.849 17:33:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:32.849 17:33:41 -- common/autotest_common.sh@10 -- # set +x 00:23:33.792 17:33:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:33.792 17:33:42 -- common/autotest_common.sh@852 -- # return 0 00:23:33.792 17:33:42 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.792 17:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.792 17:33:42 -- common/autotest_common.sh@10 -- # set +x 00:23:33.792 17:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.792 17:33:42 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:33.792 17:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.792 17:33:42 -- common/autotest_common.sh@10 -- # set +x 00:23:33.792 Malloc0 00:23:33.792 17:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.792 17:33:42 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:33.792 17:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.792 17:33:42 -- common/autotest_common.sh@10 -- # set +x 00:23:33.792 17:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.792 17:33:42 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.792 17:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.792 17:33:42 -- common/autotest_common.sh@10 -- # set +x 00:23:33.792 17:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
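The `rpc_cmd` calls traced above provision the fuzz target step by step: create the TCP transport, back it with a RAM disk, then build the subsystem around it. A minimal dry-run sketch of that sequence follows; `RPC` defaults to `echo rpc.py`, so the script only prints the commands and is safe to run without a live `nvmf_tgt` (point `RPC` at a real `rpc.py` to execute them).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-provisioning RPCs traced in this log.
# RPC defaults to "echo rpc.py", so nothing is sent to a live target.
set -euo pipefail

RPC=${RPC:-echo rpc.py}
NQN=nqn.2016-06.io.spdk:cnode1                         # subsystem NQN from the log

$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create -b Malloc0 64 512              # 64 MiB RAM bdev, 512 B blocks
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0              # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The fuzzer is then pointed at the resulting `trtype:tcp ... trsvcid:4420` transport ID, as the log shows next.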
00:23:33.792 17:33:42 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.792 17:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.792 17:33:42 -- common/autotest_common.sh@10 -- # set +x 00:23:33.792 17:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.792 17:33:42 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:33.792 17:33:42 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:05.914 Fuzzing completed. Shutting down the fuzz application 00:24:05.914 00:24:05.914 Dumping successful admin opcodes: 00:24:05.914 8, 9, 10, 24, 00:24:05.914 Dumping successful io opcodes: 00:24:05.914 0, 9, 00:24:05.914 NS: 0x200003aeff00 I/O qp, Total commands completed: 954127, total successful commands: 5578, random_seed: 157859840 00:24:05.914 NS: 0x200003aeff00 admin qp, Total commands completed: 120635, total successful commands: 989, random_seed: 1660336576 00:24:05.914 17:34:12 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:05.914 Fuzzing completed. 
Shutting down the fuzz application 00:24:05.914 00:24:05.914 Dumping successful admin opcodes: 00:24:05.914 24, 00:24:05.914 Dumping successful io opcodes: 00:24:05.914 00:24:05.914 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2866655147 00:24:05.914 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2866730445 00:24:05.914 17:34:13 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.914 17:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:05.914 17:34:13 -- common/autotest_common.sh@10 -- # set +x 00:24:05.914 17:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:05.914 17:34:13 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:05.914 17:34:13 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:05.914 17:34:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:05.914 17:34:13 -- nvmf/common.sh@116 -- # sync 00:24:05.914 17:34:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:05.914 17:34:13 -- nvmf/common.sh@119 -- # set +e 00:24:05.914 17:34:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:05.914 17:34:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:05.914 rmmod nvme_tcp 00:24:05.914 rmmod nvme_fabrics 00:24:05.914 rmmod nvme_keyring 00:24:05.914 17:34:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:05.914 17:34:13 -- nvmf/common.sh@123 -- # set -e 00:24:05.914 17:34:13 -- nvmf/common.sh@124 -- # return 0 00:24:05.914 17:34:13 -- nvmf/common.sh@477 -- # '[' -n 3258856 ']' 00:24:05.914 17:34:13 -- nvmf/common.sh@478 -- # killprocess 3258856 00:24:05.914 17:34:13 -- common/autotest_common.sh@926 -- # '[' -z 3258856 ']' 00:24:05.914 17:34:13 -- common/autotest_common.sh@930 -- # kill -0 3258856 00:24:05.914 17:34:13 -- common/autotest_common.sh@931 -- # uname 00:24:05.914 17:34:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:24:05.914 17:34:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3258856 00:24:05.914 17:34:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:05.914 17:34:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:05.914 17:34:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3258856' 00:24:05.914 killing process with pid 3258856 00:24:05.914 17:34:13 -- common/autotest_common.sh@945 -- # kill 3258856 00:24:05.914 17:34:13 -- common/autotest_common.sh@950 -- # wait 3258856 00:24:05.914 17:34:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:05.914 17:34:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:05.914 17:34:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:05.914 17:34:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.914 17:34:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:05.914 17:34:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.914 17:34:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.914 17:34:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.971 17:34:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:07.971 17:34:16 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:07.971 00:24:07.971 real 0m42.015s 00:24:07.971 user 0m55.468s 00:24:07.971 sys 0m15.833s 00:24:07.971 17:34:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:07.971 17:34:16 -- common/autotest_common.sh@10 -- # set +x 00:24:07.971 ************************************ 00:24:07.971 END TEST nvmf_fuzz 00:24:07.971 ************************************ 00:24:07.971 17:34:16 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh 
--transport=tcp 00:24:07.971 17:34:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:07.971 17:34:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:07.971 17:34:16 -- common/autotest_common.sh@10 -- # set +x 00:24:07.971 ************************************ 00:24:07.971 START TEST nvmf_multiconnection 00:24:07.971 ************************************ 00:24:07.971 17:34:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:07.971 * Looking for test storage... 00:24:07.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:07.971 17:34:16 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.971 17:34:16 -- nvmf/common.sh@7 -- # uname -s 00:24:07.971 17:34:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.971 17:34:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.971 17:34:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.971 17:34:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.971 17:34:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.971 17:34:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.971 17:34:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.971 17:34:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.971 17:34:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.971 17:34:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.971 17:34:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:07.971 17:34:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:07.971 17:34:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.971 17:34:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:24:07.971 17:34:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.971 17:34:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.971 17:34:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.971 17:34:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.971 17:34:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.971 17:34:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.971 17:34:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.971 17:34:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.971 17:34:16 -- paths/export.sh@5 -- # export PATH 00:24:07.971 17:34:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.971 17:34:16 -- nvmf/common.sh@46 -- # : 0 00:24:07.971 17:34:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:07.971 17:34:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:07.971 17:34:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:07.971 17:34:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.971 17:34:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.971 17:34:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:07.971 17:34:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:07.971 17:34:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:07.971 17:34:16 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:07.971 17:34:16 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:07.971 17:34:16 -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:07.971 17:34:16 -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:07.971 17:34:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:07.971 17:34:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.971 17:34:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:07.971 17:34:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:07.971 17:34:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:07.971 17:34:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.971 17:34:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.971 17:34:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.971 17:34:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:07.971 17:34:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:07.971 17:34:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:07.971 17:34:16 -- common/autotest_common.sh@10 -- # set +x 00:24:16.108 17:34:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:16.108 17:34:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:16.108 17:34:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:16.108 17:34:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:16.108 17:34:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:16.108 17:34:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:16.108 17:34:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:16.108 17:34:23 -- nvmf/common.sh@294 -- # net_devs=() 00:24:16.108 17:34:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:16.108 17:34:23 -- nvmf/common.sh@295 -- # e810=() 00:24:16.108 17:34:23 -- nvmf/common.sh@295 -- # local -ga e810 00:24:16.108 17:34:23 -- nvmf/common.sh@296 -- # x722=() 00:24:16.108 17:34:23 -- nvmf/common.sh@296 -- # local -ga x722 00:24:16.108 17:34:23 -- nvmf/common.sh@297 -- # mlx=() 00:24:16.108 17:34:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:16.108 
17:34:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.108 17:34:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:16.108 17:34:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:16.108 17:34:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:16.108 17:34:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:16.108 17:34:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:16.108 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:16.108 17:34:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:24:16.108 17:34:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:16.108 17:34:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:16.108 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:16.108 17:34:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:16.108 17:34:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:16.108 17:34:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.108 17:34:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:16.108 17:34:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.108 17:34:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:16.108 Found net devices under 0000:31:00.0: cvl_0_0 00:24:16.108 17:34:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.108 17:34:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:16.108 17:34:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.108 17:34:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:16.108 17:34:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.108 17:34:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:16.108 Found net devices under 0000:31:00.1: cvl_0_1 00:24:16.108 17:34:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.108 17:34:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:16.108 
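The discovery loop traced above resolves each supported PCI address to its kernel netdev by globbing sysfs and stripping the path prefix. A self-contained sketch of that glob-and-strip step, run against a fake sysfs tree under `mktemp` so it works without real E810 hardware (the real script walks `/sys/bus/pci/devices`):

```shell
#!/usr/bin/env bash
# Sketch of the pci_net_devs glob used by gather_supported_nvmf_pci_devs:
# list net/<iface> entries under each PCI function, keep only the names.
# A fake sysfs tree stands in for /sys/bus/pci/devices.
set -euo pipefail

sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:31:00.0/net/cvl_0_0" "$sysfs/0000:31:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:31:00.0 0000:31:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)           # one entry per netdev on this function
    pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the directory prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

With two matching functions found, `net_devs` ends up as `cvl_0_0 cvl_0_1`, which is exactly the pair the log feeds into `nvmf_tcp_init`.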
17:34:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:16.108 17:34:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:16.108 17:34:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:16.108 17:34:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.108 17:34:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.108 17:34:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.108 17:34:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:16.108 17:34:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.108 17:34:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.108 17:34:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:16.109 17:34:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.109 17:34:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.109 17:34:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:16.109 17:34:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:16.109 17:34:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.109 17:34:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.109 17:34:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.109 17:34:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.109 17:34:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:16.109 17:34:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.109 17:34:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.109 17:34:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.109 17:34:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:16.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:16.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:24:16.109 00:24:16.109 --- 10.0.0.2 ping statistics --- 00:24:16.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.109 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:24:16.109 17:34:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:24:16.109 00:24:16.109 --- 10.0.0.1 ping statistics --- 00:24:16.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.109 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:16.109 17:34:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.109 17:34:23 -- nvmf/common.sh@410 -- # return 0 00:24:16.109 17:34:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:16.109 17:34:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.109 17:34:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:16.109 17:34:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:16.109 17:34:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.109 17:34:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:16.109 17:34:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:16.109 17:34:23 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:16.109 17:34:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:16.109 17:34:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:16.109 17:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 17:34:23 -- nvmf/common.sh@469 -- # nvmfpid=3269387 00:24:16.109 17:34:23 -- nvmf/common.sh@470 -- # waitforlisten 3269387 00:24:16.109 17:34:23 -- common/autotest_common.sh@819 -- # '[' -z 3269387 ']' 00:24:16.109 17:34:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.109 17:34:23 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:24:16.109 17:34:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.109 17:34:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:16.109 17:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 17:34:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:16.109 [2024-10-13 17:34:23.792283] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:16.109 [2024-10-13 17:34:23.792343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.109 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.109 [2024-10-13 17:34:23.864822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.109 [2024-10-13 17:34:23.903777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:16.109 [2024-10-13 17:34:23.903929] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.109 [2024-10-13 17:34:23.903942] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.109 [2024-10-13 17:34:23.903951] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:16.109 [2024-10-13 17:34:23.904103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.109 [2024-10-13 17:34:23.904224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.109 [2024-10-13 17:34:23.904384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.109 [2024-10-13 17:34:23.904385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.109 17:34:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:16.109 17:34:24 -- common/autotest_common.sh@852 -- # return 0 00:24:16.109 17:34:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:16.109 17:34:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:16.109 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 17:34:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.109 17:34:24 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:16.109 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.109 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 [2024-10-13 17:34:24.624344] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.370 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.370 17:34:24 -- target/multiconnection.sh@21 -- # seq 1 11 00:24:16.370 17:34:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.370 17:34:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:16.370 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.370 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.370 Malloc1 00:24:16.370 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.370 17:34:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:16.370 17:34:24 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.370 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.370 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.370 17:34:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:16.370 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.370 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.370 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.370 17:34:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.370 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.370 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.370 [2024-10-13 17:34:24.675757] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.370 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.370 17:34:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.370 17:34:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:16.370 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.370 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.370 Malloc2 00:24:16.370 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.370 17:34:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:16.370 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.370 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.371 17:34:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 Malloc3 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.371 17:34:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 Malloc4 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.371 17:34:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 Malloc5 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s 
SPDK5 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.371 17:34:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 Malloc6 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.371 17:34:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 Malloc7 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.371 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.371 17:34:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:16.371 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.371 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:16.633 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.633 17:34:24 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:16.633 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 Malloc8 00:24:16.633 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:16.633 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:16.633 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:16.633 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.633 17:34:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:16.633 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 Malloc9 00:24:16.633 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:16.633 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:24:16.633 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:16.633 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:16.633 17:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.633 17:34:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:16.633 17:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 Malloc10 00:24:16.633 17:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:16.633 17:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:16.633 17:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:25 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:16.633 17:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.633 17:34:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:16.633 17:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 Malloc11 00:24:16.633 17:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:16.633 17:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:16.633 17:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:16.633 17:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.633 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 17:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.633 17:34:25 -- target/multiconnection.sh@28 -- # seq 1 11 00:24:16.633 17:34:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.633 17:34:25 
-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:18.548 17:34:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:18.548 17:34:26 -- common/autotest_common.sh@1177 -- # local i=0 00:24:18.548 17:34:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:18.548 17:34:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:18.548 17:34:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:20.459 17:34:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:20.459 17:34:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:20.459 17:34:28 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:24:20.459 17:34:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:20.459 17:34:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:20.459 17:34:28 -- common/autotest_common.sh@1187 -- # return 0 00:24:20.460 17:34:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.460 17:34:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:21.844 17:34:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:21.844 17:34:30 -- common/autotest_common.sh@1177 -- # local i=0 00:24:21.844 17:34:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.844 17:34:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:21.844 17:34:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:23.758 17:34:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:23.758 17:34:32 -- common/autotest_common.sh@1186 -- # lsblk 
-l -o NAME,SERIAL 00:24:23.758 17:34:32 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:24:23.758 17:34:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:23.758 17:34:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.758 17:34:32 -- common/autotest_common.sh@1187 -- # return 0 00:24:23.758 17:34:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.758 17:34:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:25.668 17:34:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:25.668 17:34:33 -- common/autotest_common.sh@1177 -- # local i=0 00:24:25.668 17:34:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.669 17:34:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:25.669 17:34:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:27.579 17:34:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:27.579 17:34:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:27.580 17:34:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:24:27.580 17:34:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:27.580 17:34:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.580 17:34:35 -- common/autotest_common.sh@1187 -- # return 0 00:24:27.580 17:34:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.580 17:34:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:29.490 17:34:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 
00:24:29.490 17:34:37 -- common/autotest_common.sh@1177 -- # local i=0 00:24:29.490 17:34:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.490 17:34:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:29.490 17:34:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:31.402 17:34:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:31.402 17:34:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:31.402 17:34:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:24:31.402 17:34:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:31.402 17:34:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:31.402 17:34:39 -- common/autotest_common.sh@1187 -- # return 0 00:24:31.402 17:34:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.402 17:34:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:32.786 17:34:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:32.786 17:34:41 -- common/autotest_common.sh@1177 -- # local i=0 00:24:32.786 17:34:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:32.786 17:34:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:32.786 17:34:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:34.698 17:34:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:34.698 17:34:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:34.698 17:34:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:24:34.698 17:34:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:34.698 17:34:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:34.698 17:34:43 -- 
common/autotest_common.sh@1187 -- # return 0 00:24:34.698 17:34:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.698 17:34:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:36.612 17:34:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:36.612 17:34:44 -- common/autotest_common.sh@1177 -- # local i=0 00:24:36.612 17:34:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:36.612 17:34:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:36.612 17:34:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:38.521 17:34:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:38.521 17:34:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:38.521 17:34:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:24:38.521 17:34:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:38.521 17:34:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:38.521 17:34:46 -- common/autotest_common.sh@1187 -- # return 0 00:24:38.521 17:34:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.521 17:34:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:40.429 17:34:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:40.429 17:34:48 -- common/autotest_common.sh@1177 -- # local i=0 00:24:40.429 17:34:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:40.429 17:34:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:40.429 17:34:48 -- common/autotest_common.sh@1184 
-- # sleep 2 00:24:42.339 17:34:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:42.339 17:34:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:42.339 17:34:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:24:42.339 17:34:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:42.339 17:34:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:42.339 17:34:50 -- common/autotest_common.sh@1187 -- # return 0 00:24:42.339 17:34:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.339 17:34:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:44.253 17:34:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:44.253 17:34:52 -- common/autotest_common.sh@1177 -- # local i=0 00:24:44.253 17:34:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:44.253 17:34:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:44.253 17:34:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:46.167 17:34:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:46.167 17:34:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:46.167 17:34:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:24:46.167 17:34:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:46.167 17:34:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:46.167 17:34:54 -- common/autotest_common.sh@1187 -- # return 0 00:24:46.167 17:34:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.167 17:34:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:48.078 17:34:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:48.078 17:34:56 -- common/autotest_common.sh@1177 -- # local i=0 00:24:48.078 17:34:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:48.078 17:34:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:48.078 17:34:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:49.986 17:34:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:49.986 17:34:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:49.986 17:34:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:24:49.986 17:34:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:49.986 17:34:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:49.986 17:34:58 -- common/autotest_common.sh@1187 -- # return 0 00:24:49.986 17:34:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.986 17:34:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:51.897 17:35:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:51.897 17:35:00 -- common/autotest_common.sh@1177 -- # local i=0 00:24:51.897 17:35:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:51.897 17:35:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:51.897 17:35:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:53.807 17:35:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:53.807 17:35:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:53.807 17:35:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:24:53.807 17:35:02 -- 
common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:53.807 17:35:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:53.807 17:35:02 -- common/autotest_common.sh@1187 -- # return 0 00:24:53.807 17:35:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.807 17:35:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:55.719 17:35:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:55.719 17:35:04 -- common/autotest_common.sh@1177 -- # local i=0 00:24:55.719 17:35:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:55.719 17:35:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:55.719 17:35:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:57.633 17:35:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:57.633 17:35:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:57.633 17:35:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:24:57.633 17:35:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:57.633 17:35:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:57.633 17:35:06 -- common/autotest_common.sh@1187 -- # return 0 00:24:57.633 17:35:06 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:57.893 [global] 00:24:57.893 thread=1 00:24:57.893 invalidate=1 00:24:57.893 rw=read 00:24:57.893 time_based=1 00:24:57.893 runtime=10 00:24:57.893 ioengine=libaio 00:24:57.893 direct=1 00:24:57.893 bs=262144 00:24:57.893 iodepth=64 00:24:57.893 norandommap=1 00:24:57.893 numjobs=1 00:24:57.893 00:24:57.893 [job0] 00:24:57.893 filename=/dev/nvme0n1 00:24:57.893 [job1] 
00:24:57.893 filename=/dev/nvme10n1 00:24:57.893 [job2] 00:24:57.893 filename=/dev/nvme1n1 00:24:57.893 [job3] 00:24:57.893 filename=/dev/nvme2n1 00:24:57.893 [job4] 00:24:57.893 filename=/dev/nvme3n1 00:24:57.893 [job5] 00:24:57.893 filename=/dev/nvme4n1 00:24:57.893 [job6] 00:24:57.893 filename=/dev/nvme5n1 00:24:57.893 [job7] 00:24:57.893 filename=/dev/nvme6n1 00:24:57.893 [job8] 00:24:57.893 filename=/dev/nvme7n1 00:24:57.893 [job9] 00:24:57.893 filename=/dev/nvme8n1 00:24:57.893 [job10] 00:24:57.893 filename=/dev/nvme9n1 00:24:57.893 Could not set queue depth (nvme0n1) 00:24:57.893 Could not set queue depth (nvme10n1) 00:24:57.893 Could not set queue depth (nvme1n1) 00:24:57.893 Could not set queue depth (nvme2n1) 00:24:57.893 Could not set queue depth (nvme3n1) 00:24:57.893 Could not set queue depth (nvme4n1) 00:24:57.893 Could not set queue depth (nvme5n1) 00:24:57.893 Could not set queue depth (nvme6n1) 00:24:57.893 Could not set queue depth (nvme7n1) 00:24:57.893 Could not set queue depth (nvme8n1) 00:24:57.893 Could not set queue depth (nvme9n1) 00:24:58.462 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.462 fio-3.35 00:24:58.462 Starting 11 threads 00:25:10.723 00:25:10.723 job0: (groupid=0, jobs=1): err= 0: pid=3277907: Sun Oct 13 17:35:17 2024 00:25:10.723 read: IOPS=688, BW=172MiB/s (181MB/s)(1729MiB/10040msec) 00:25:10.723 slat (usec): min=5, max=102275, avg=1314.25, stdev=4691.33 00:25:10.723 clat (usec): min=1920, max=234657, avg=91522.46, stdev=44961.72 00:25:10.723 lat (usec): min=1975, max=262761, avg=92836.71, stdev=45770.24 00:25:10.723 clat percentiles (msec): 00:25:10.723 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 20], 20.00th=[ 32], 00:25:10.723 | 30.00th=[ 79], 40.00th=[ 97], 50.00th=[ 108], 60.00th=[ 115], 00:25:10.723 | 70.00th=[ 122], 80.00th=[ 127], 90.00th=[ 136], 95.00th=[ 148], 00:25:10.723 | 99.00th=[ 171], 99.50th=[ 174], 99.90th=[ 203], 99.95th=[ 207], 00:25:10.723 | 99.99th=[ 234] 00:25:10.723 bw ( KiB/s): min=111104, max=319488, per=7.69%, avg=175411.20, stdev=58767.39, samples=20 00:25:10.723 iops : min= 434, max= 1248, avg=685.20, stdev=229.56, samples=20 00:25:10.723 lat (msec) : 2=0.01%, 4=1.11%, 10=2.99%, 20=6.75%, 50=13.90% 00:25:10.723 lat (msec) : 100=18.93%, 250=56.30% 00:25:10.723 cpu : usr=0.21%, sys=2.08%, ctx=1580, majf=0, minf=4097 00:25:10.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:10.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.723 issued rwts: total=6915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.723 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:25:10.723 job1: (groupid=0, jobs=1): err= 0: pid=3277908: Sun Oct 13 17:35:17 2024 00:25:10.723 read: IOPS=1064, BW=266MiB/s (279MB/s)(2672MiB/10036msec) 00:25:10.723 slat (usec): min=5, max=88079, avg=779.16, stdev=2930.42 00:25:10.723 clat (usec): min=1389, max=159148, avg=59268.62, stdev=33769.61 00:25:10.723 lat (usec): min=1435, max=197059, avg=60047.78, stdev=34171.50 00:25:10.723 clat percentiles (msec): 00:25:10.723 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 28], 00:25:10.723 | 30.00th=[ 38], 40.00th=[ 44], 50.00th=[ 57], 60.00th=[ 68], 00:25:10.723 | 70.00th=[ 80], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 115], 00:25:10.723 | 99.00th=[ 134], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 153], 00:25:10.723 | 99.99th=[ 159] 00:25:10.723 bw ( KiB/s): min=158208, max=418816, per=11.93%, avg=272039.25, stdev=80223.77, samples=20 00:25:10.723 iops : min= 618, max= 1636, avg=1062.65, stdev=313.37, samples=20 00:25:10.723 lat (msec) : 2=0.16%, 4=0.97%, 10=5.28%, 20=7.73%, 50=31.86% 00:25:10.723 lat (msec) : 100=39.46%, 250=14.54% 00:25:10.723 cpu : usr=0.28%, sys=3.48%, ctx=2227, majf=0, minf=4097 00:25:10.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:10.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.723 issued rwts: total=10688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.723 job2: (groupid=0, jobs=1): err= 0: pid=3277909: Sun Oct 13 17:35:17 2024 00:25:10.723 read: IOPS=855, BW=214MiB/s (224MB/s)(2162MiB/10103msec) 00:25:10.723 slat (usec): min=6, max=147325, avg=948.31, stdev=3699.45 00:25:10.723 clat (usec): min=1793, max=209453, avg=73728.28, stdev=33751.82 00:25:10.723 lat (usec): min=1843, max=323622, avg=74676.59, stdev=34248.31 00:25:10.723 clat percentiles (msec): 00:25:10.723 
| 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 45], 00:25:10.723 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 75], 60.00th=[ 84], 00:25:10.723 | 70.00th=[ 92], 80.00th=[ 101], 90.00th=[ 113], 95.00th=[ 127], 00:25:10.723 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 203], 99.95th=[ 203], 00:25:10.723 | 99.99th=[ 209] 00:25:10.723 bw ( KiB/s): min=127488, max=403968, per=9.64%, avg=219724.80, stdev=76086.76, samples=20 00:25:10.723 iops : min= 498, max= 1578, avg=858.30, stdev=297.21, samples=20 00:25:10.723 lat (msec) : 2=0.01%, 4=0.68%, 10=2.07%, 20=4.57%, 50=16.27% 00:25:10.723 lat (msec) : 100=55.66%, 250=20.74% 00:25:10.723 cpu : usr=0.35%, sys=2.59%, ctx=1999, majf=0, minf=3536 00:25:10.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:10.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.723 issued rwts: total=8647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.723 job3: (groupid=0, jobs=1): err= 0: pid=3277910: Sun Oct 13 17:35:17 2024 00:25:10.723 read: IOPS=848, BW=212MiB/s (222MB/s)(2137MiB/10081msec) 00:25:10.723 slat (usec): min=5, max=133492, avg=997.77, stdev=4187.19 00:25:10.723 clat (usec): min=1465, max=229198, avg=74356.44, stdev=50882.25 00:25:10.723 lat (usec): min=1480, max=229248, avg=75354.21, stdev=51682.13 00:25:10.723 clat percentiles (msec): 00:25:10.723 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 12], 20.00th=[ 19], 00:25:10.723 | 30.00th=[ 27], 40.00th=[ 40], 50.00th=[ 89], 60.00th=[ 105], 00:25:10.724 | 70.00th=[ 116], 80.00th=[ 124], 90.00th=[ 133], 95.00th=[ 146], 00:25:10.724 | 99.00th=[ 161], 99.50th=[ 218], 99.90th=[ 226], 99.95th=[ 228], 00:25:10.724 | 99.99th=[ 230] 00:25:10.724 bw ( KiB/s): min=111616, max=509440, per=9.53%, avg=217274.00, stdev=119556.77, samples=20 00:25:10.724 iops : min= 436, max= 1990, 
avg=848.70, stdev=466.99, samples=20 00:25:10.724 lat (msec) : 2=0.01%, 4=1.99%, 10=6.20%, 20=14.01%, 50=20.53% 00:25:10.724 lat (msec) : 100=14.18%, 250=43.08% 00:25:10.724 cpu : usr=0.24%, sys=2.69%, ctx=2031, majf=0, minf=4097 00:25:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.724 issued rwts: total=8549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.724 job4: (groupid=0, jobs=1): err= 0: pid=3277915: Sun Oct 13 17:35:17 2024 00:25:10.724 read: IOPS=979, BW=245MiB/s (257MB/s)(2471MiB/10090msec) 00:25:10.724 slat (usec): min=6, max=110720, avg=832.03, stdev=3563.42 00:25:10.724 clat (usec): min=1719, max=270960, avg=64415.37, stdev=40458.77 00:25:10.724 lat (usec): min=1765, max=270987, avg=65247.41, stdev=40947.76 00:25:10.724 clat percentiles (msec): 00:25:10.724 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 32], 00:25:10.724 | 30.00th=[ 36], 40.00th=[ 43], 50.00th=[ 49], 60.00th=[ 79], 00:25:10.724 | 70.00th=[ 89], 80.00th=[ 99], 90.00th=[ 116], 95.00th=[ 138], 00:25:10.724 | 99.00th=[ 171], 99.50th=[ 182], 99.90th=[ 247], 99.95th=[ 257], 00:25:10.724 | 99.99th=[ 271] 00:25:10.724 bw ( KiB/s): min=99328, max=469504, per=11.02%, avg=251381.50, stdev=124498.87, samples=20 00:25:10.724 iops : min= 388, max= 1834, avg=981.95, stdev=486.33, samples=20 00:25:10.724 lat (msec) : 2=0.05%, 4=1.14%, 10=3.74%, 20=5.62%, 50=40.95% 00:25:10.724 lat (msec) : 100=30.40%, 250=18.00%, 500=0.09% 00:25:10.724 cpu : usr=0.39%, sys=3.34%, ctx=2123, majf=0, minf=4097 00:25:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.724 issued rwts: total=9882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.724 job5: (groupid=0, jobs=1): err= 0: pid=3277928: Sun Oct 13 17:35:17 2024 00:25:10.724 read: IOPS=1097, BW=274MiB/s (288MB/s)(2756MiB/10041msec) 00:25:10.724 slat (usec): min=5, max=58710, avg=761.09, stdev=2803.88 00:25:10.724 clat (usec): min=1410, max=174344, avg=57448.02, stdev=34634.18 00:25:10.724 lat (usec): min=1522, max=174380, avg=58209.11, stdev=35117.72 00:25:10.724 clat percentiles (msec): 00:25:10.724 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 18], 20.00th=[ 26], 00:25:10.724 | 30.00th=[ 32], 40.00th=[ 45], 50.00th=[ 52], 60.00th=[ 60], 00:25:10.724 | 70.00th=[ 75], 80.00th=[ 91], 90.00th=[ 107], 95.00th=[ 123], 00:25:10.724 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 167], 99.95th=[ 171], 00:25:10.724 | 99.99th=[ 176] 00:25:10.724 bw ( KiB/s): min=126976, max=448512, per=12.31%, avg=280622.90, stdev=104790.09, samples=20 00:25:10.724 iops : min= 496, max= 1752, avg=1096.15, stdev=409.36, samples=20 00:25:10.724 lat (msec) : 2=0.13%, 4=0.86%, 10=2.65%, 20=9.85%, 50=35.20% 00:25:10.724 lat (msec) : 100=37.69%, 250=13.62% 00:25:10.724 cpu : usr=0.44%, sys=3.16%, ctx=2438, majf=0, minf=4097 00:25:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.724 issued rwts: total=11024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.724 job6: (groupid=0, jobs=1): err= 0: pid=3277934: Sun Oct 13 17:35:17 2024 00:25:10.724 read: IOPS=681, BW=170MiB/s (179MB/s)(1720MiB/10095msec) 00:25:10.724 slat (usec): min=6, max=95929, avg=1315.60, stdev=4450.29 00:25:10.724 clat (msec): min=2, max=254, avg=92.49, stdev=41.49 
00:25:10.724 lat (msec): min=2, max=257, avg=93.80, stdev=42.18 00:25:10.724 clat percentiles (msec): 00:25:10.724 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 48], 00:25:10.724 | 30.00th=[ 71], 40.00th=[ 91], 50.00th=[ 103], 60.00th=[ 112], 00:25:10.724 | 70.00th=[ 120], 80.00th=[ 126], 90.00th=[ 136], 95.00th=[ 148], 00:25:10.724 | 99.00th=[ 180], 99.50th=[ 199], 99.90th=[ 251], 99.95th=[ 251], 00:25:10.724 | 99.99th=[ 255] 00:25:10.724 bw ( KiB/s): min=113664, max=410112, per=7.65%, avg=174464.00, stdev=70841.82, samples=20 00:25:10.724 iops : min= 444, max= 1602, avg=681.50, stdev=276.73, samples=20 00:25:10.724 lat (msec) : 4=0.73%, 10=2.46%, 20=3.04%, 50=15.86%, 100=26.05% 00:25:10.724 lat (msec) : 250=51.80%, 500=0.06% 00:25:10.724 cpu : usr=0.32%, sys=2.05%, ctx=1553, majf=0, minf=4097 00:25:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.724 issued rwts: total=6878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.724 job7: (groupid=0, jobs=1): err= 0: pid=3277940: Sun Oct 13 17:35:17 2024 00:25:10.724 read: IOPS=842, BW=211MiB/s (221MB/s)(2115MiB/10042msec) 00:25:10.724 slat (usec): min=5, max=93711, avg=1048.77, stdev=3336.02 00:25:10.724 clat (usec): min=1682, max=155118, avg=74832.73, stdev=30652.10 00:25:10.724 lat (usec): min=1889, max=231144, avg=75881.49, stdev=31045.99 00:25:10.724 clat percentiles (msec): 00:25:10.724 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 28], 20.00th=[ 51], 00:25:10.724 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 77], 60.00th=[ 86], 00:25:10.724 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 111], 95.00th=[ 120], 00:25:10.724 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 153], 00:25:10.724 | 99.99th=[ 155] 00:25:10.724 bw ( KiB/s): 
min=152064, max=339968, per=9.43%, avg=214937.60, stdev=64374.68, samples=20 00:25:10.724 iops : min= 594, max= 1328, avg=839.60, stdev=251.46, samples=20 00:25:10.724 lat (msec) : 2=0.05%, 4=0.63%, 10=1.49%, 20=4.43%, 50=13.03% 00:25:10.724 lat (msec) : 100=57.17%, 250=23.21% 00:25:10.724 cpu : usr=0.28%, sys=2.50%, ctx=1797, majf=0, minf=4097 00:25:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.724 issued rwts: total=8459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.724 job8: (groupid=0, jobs=1): err= 0: pid=3277956: Sun Oct 13 17:35:17 2024 00:25:10.724 read: IOPS=773, BW=193MiB/s (203MB/s)(1941MiB/10039msec) 00:25:10.724 slat (usec): min=6, max=63112, avg=1239.40, stdev=3629.38 00:25:10.724 clat (usec): min=1923, max=183935, avg=81426.52, stdev=28447.56 00:25:10.724 lat (usec): min=1971, max=183960, avg=82665.92, stdev=28778.72 00:25:10.724 clat percentiles (msec): 00:25:10.724 | 1.00th=[ 10], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 55], 00:25:10.724 | 30.00th=[ 66], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 90], 00:25:10.724 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 116], 95.00th=[ 125], 00:25:10.724 | 99.00th=[ 150], 99.50th=[ 163], 99.90th=[ 180], 99.95th=[ 184], 00:25:10.724 | 99.99th=[ 184] 00:25:10.724 bw ( KiB/s): min=133632, max=320000, per=8.64%, avg=197120.00, stdev=52176.95, samples=20 00:25:10.724 iops : min= 522, max= 1250, avg=770.00, stdev=203.82, samples=20 00:25:10.724 lat (msec) : 2=0.01%, 4=0.27%, 10=0.77%, 20=1.35%, 50=12.51% 00:25:10.724 lat (msec) : 100=57.17%, 250=27.91% 00:25:10.724 cpu : usr=0.29%, sys=2.58%, ctx=1473, majf=0, minf=4097 00:25:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:10.724 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.724 issued rwts: total=7763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.724 job9: (groupid=0, jobs=1): err= 0: pid=3277965: Sun Oct 13 17:35:17 2024 00:25:10.724 read: IOPS=549, BW=137MiB/s (144MB/s)(1387MiB/10088msec) 00:25:10.724 slat (usec): min=6, max=69169, avg=1772.48, stdev=4612.40 00:25:10.724 clat (msec): min=24, max=275, avg=114.47, stdev=21.99 00:25:10.724 lat (msec): min=25, max=275, avg=116.24, stdev=22.48 00:25:10.724 clat percentiles (msec): 00:25:10.724 | 1.00th=[ 68], 5.00th=[ 81], 10.00th=[ 91], 20.00th=[ 97], 00:25:10.724 | 30.00th=[ 105], 40.00th=[ 110], 50.00th=[ 115], 60.00th=[ 120], 00:25:10.724 | 70.00th=[ 124], 80.00th=[ 129], 90.00th=[ 140], 95.00th=[ 150], 00:25:10.724 | 99.00th=[ 176], 99.50th=[ 199], 99.90th=[ 228], 99.95th=[ 228], 00:25:10.724 | 99.99th=[ 275] 00:25:10.724 bw ( KiB/s): min=93696, max=181760, per=6.16%, avg=140390.40, stdev=22309.90, samples=20 00:25:10.724 iops : min= 366, max= 710, avg=548.40, stdev=87.15, samples=20 00:25:10.724 lat (msec) : 50=0.81%, 100=23.59%, 250=75.56%, 500=0.04% 00:25:10.724 cpu : usr=0.28%, sys=2.04%, ctx=1270, majf=0, minf=4097 00:25:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.724 issued rwts: total=5548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.724 job10: (groupid=0, jobs=1): err= 0: pid=3277970: Sun Oct 13 17:35:17 2024 00:25:10.724 read: IOPS=561, BW=140MiB/s (147MB/s)(1409MiB/10042msec) 00:25:10.724 slat (usec): min=8, max=86848, avg=1740.50, stdev=4905.86 00:25:10.724 clat (msec): 
min=4, max=243, avg=112.14, stdev=26.74 00:25:10.724 lat (msec): min=4, max=243, avg=113.88, stdev=27.31 00:25:10.724 clat percentiles (msec): 00:25:10.724 | 1.00th=[ 21], 5.00th=[ 60], 10.00th=[ 83], 20.00th=[ 96], 00:25:10.724 | 30.00th=[ 105], 40.00th=[ 111], 50.00th=[ 117], 60.00th=[ 122], 00:25:10.724 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 138], 95.00th=[ 148], 00:25:10.724 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 209], 99.95th=[ 215], 00:25:10.724 | 99.99th=[ 243] 00:25:10.724 bw ( KiB/s): min=109056, max=225792, per=6.26%, avg=142668.80, stdev=25102.17, samples=20 00:25:10.724 iops : min= 426, max= 882, avg=557.30, stdev=98.06, samples=20 00:25:10.724 lat (msec) : 10=0.12%, 20=0.87%, 50=2.20%, 100=21.27%, 250=75.53% 00:25:10.724 cpu : usr=0.25%, sys=2.11%, ctx=1278, majf=0, minf=4097 00:25:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.724 issued rwts: total=5636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.724 00:25:10.724 Run status group 0 (all jobs): 00:25:10.724 READ: bw=2227MiB/s (2335MB/s), 137MiB/s-274MiB/s (144MB/s-288MB/s), io=22.0GiB (23.6GB), run=10036-10103msec 00:25:10.724 00:25:10.724 Disk stats (read/write): 00:25:10.724 nvme0n1: ios=13221/0, merge=0/0, ticks=1214980/0, in_queue=1214980, util=96.29% 00:25:10.725 nvme10n1: ios=20923/0, merge=0/0, ticks=1224871/0, in_queue=1224871, util=96.56% 00:25:10.725 nvme1n1: ios=17166/0, merge=0/0, ticks=1237524/0, in_queue=1237524, util=97.11% 00:25:10.725 nvme2n1: ios=16845/0, merge=0/0, ticks=1220117/0, in_queue=1220117, util=97.23% 00:25:10.725 nvme3n1: ios=19546/0, merge=0/0, ticks=1217091/0, in_queue=1217091, util=97.34% 00:25:10.725 nvme4n1: ios=21295/0, merge=0/0, ticks=1222318/0, in_queue=1222318, 
util=97.79% 00:25:10.725 nvme5n1: ios=13520/0, merge=0/0, ticks=1207849/0, in_queue=1207849, util=98.03% 00:25:10.725 nvme6n1: ios=16400/0, merge=0/0, ticks=1223685/0, in_queue=1223685, util=98.18% 00:25:10.725 nvme7n1: ios=15104/0, merge=0/0, ticks=1220832/0, in_queue=1220832, util=98.74% 00:25:10.725 nvme8n1: ios=10856/0, merge=0/0, ticks=1205108/0, in_queue=1205108, util=98.97% 00:25:10.725 nvme9n1: ios=10830/0, merge=0/0, ticks=1209486/0, in_queue=1209486, util=99.24% 00:25:10.725 17:35:17 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:10.725 [global] 00:25:10.725 thread=1 00:25:10.725 invalidate=1 00:25:10.725 rw=randwrite 00:25:10.725 time_based=1 00:25:10.725 runtime=10 00:25:10.725 ioengine=libaio 00:25:10.725 direct=1 00:25:10.725 bs=262144 00:25:10.725 iodepth=64 00:25:10.725 norandommap=1 00:25:10.725 numjobs=1 00:25:10.725 00:25:10.725 [job0] 00:25:10.725 filename=/dev/nvme0n1 00:25:10.725 [job1] 00:25:10.725 filename=/dev/nvme10n1 00:25:10.725 [job2] 00:25:10.725 filename=/dev/nvme1n1 00:25:10.725 [job3] 00:25:10.725 filename=/dev/nvme2n1 00:25:10.725 [job4] 00:25:10.725 filename=/dev/nvme3n1 00:25:10.725 [job5] 00:25:10.725 filename=/dev/nvme4n1 00:25:10.725 [job6] 00:25:10.725 filename=/dev/nvme5n1 00:25:10.725 [job7] 00:25:10.725 filename=/dev/nvme6n1 00:25:10.725 [job8] 00:25:10.725 filename=/dev/nvme7n1 00:25:10.725 [job9] 00:25:10.725 filename=/dev/nvme8n1 00:25:10.725 [job10] 00:25:10.725 filename=/dev/nvme9n1 00:25:10.725 Could not set queue depth (nvme0n1) 00:25:10.725 Could not set queue depth (nvme10n1) 00:25:10.725 Could not set queue depth (nvme1n1) 00:25:10.725 Could not set queue depth (nvme2n1) 00:25:10.725 Could not set queue depth (nvme3n1) 00:25:10.725 Could not set queue depth (nvme4n1) 00:25:10.725 Could not set queue depth (nvme5n1) 00:25:10.725 Could not set queue depth (nvme6n1) 00:25:10.725 Could not set queue depth 
(nvme7n1) 00:25:10.725 Could not set queue depth (nvme8n1) 00:25:10.725 Could not set queue depth (nvme9n1) 00:25:10.725 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.725 fio-3.35 00:25:10.725 Starting 11 threads 00:25:20.796 00:25:20.796 job0: (groupid=0, jobs=1): err= 0: pid=3280377: Sun Oct 13 17:35:28 2024 00:25:20.796 write: IOPS=697, BW=174MiB/s (183MB/s)(1758MiB/10082msec); 0 zone resets 00:25:20.796 slat (usec): min=18, max=62279, avg=1295.40, stdev=2704.18 00:25:20.796 clat (msec): min=2, max=183, avg=90.44, stdev=35.47 00:25:20.796 lat (msec): min=2, max=183, avg=91.74, stdev=36.02 00:25:20.796 clat percentiles (msec): 
00:25:20.796 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 46], 20.00th=[ 55], 00:25:20.796 | 30.00th=[ 69], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 108], 00:25:20.796 | 70.00th=[ 114], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 134], 00:25:20.796 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 182], 00:25:20.796 | 99.99th=[ 184] 00:25:20.796 bw ( KiB/s): min=112640, max=280064, per=9.51%, avg=178380.80, stdev=54551.79, samples=20 00:25:20.796 iops : min= 440, max= 1094, avg=696.80, stdev=213.09, samples=20 00:25:20.796 lat (msec) : 4=0.06%, 10=0.82%, 20=1.65%, 50=10.95%, 100=41.33% 00:25:20.796 lat (msec) : 250=45.19% 00:25:20.796 cpu : usr=1.66%, sys=2.15%, ctx=2544, majf=0, minf=1 00:25:20.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:20.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.796 issued rwts: total=0,7031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.796 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.796 job1: (groupid=0, jobs=1): err= 0: pid=3280389: Sun Oct 13 17:35:28 2024 00:25:20.796 write: IOPS=737, BW=184MiB/s (193MB/s)(1858MiB/10083msec); 0 zone resets 00:25:20.796 slat (usec): min=17, max=89964, avg=1253.47, stdev=2667.76 00:25:20.796 clat (msec): min=2, max=197, avg=85.51, stdev=21.92 00:25:20.796 lat (msec): min=3, max=214, avg=86.76, stdev=22.16 00:25:20.796 clat percentiles (msec): 00:25:20.796 | 1.00th=[ 19], 5.00th=[ 43], 10.00th=[ 61], 20.00th=[ 77], 00:25:20.796 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 88], 00:25:20.796 | 70.00th=[ 89], 80.00th=[ 104], 90.00th=[ 114], 95.00th=[ 117], 00:25:20.796 | 99.00th=[ 134], 99.50th=[ 153], 99.90th=[ 190], 99.95th=[ 192], 00:25:20.796 | 99.99th=[ 199] 00:25:20.796 bw ( KiB/s): min=141312, max=260096, per=10.06%, avg=188672.00, stdev=34538.49, samples=20 00:25:20.796 iops : min= 552, max= 1016, 
avg=737.00, stdev=134.92, samples=20 00:25:20.796 lat (msec) : 4=0.03%, 10=0.16%, 20=1.09%, 50=4.49%, 100=73.32% 00:25:20.796 lat (msec) : 250=20.91% 00:25:20.796 cpu : usr=1.67%, sys=2.28%, ctx=2383, majf=0, minf=1 00:25:20.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.797 issued rwts: total=0,7433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.797 job2: (groupid=0, jobs=1): err= 0: pid=3280390: Sun Oct 13 17:35:28 2024 00:25:20.797 write: IOPS=709, BW=177MiB/s (186MB/s)(1797MiB/10135msec); 0 zone resets 00:25:20.797 slat (usec): min=18, max=13478, avg=1278.50, stdev=2479.66 00:25:20.797 clat (usec): min=1734, max=277159, avg=88953.27, stdev=29020.49 00:25:20.797 lat (usec): min=1795, max=277201, avg=90231.78, stdev=29442.79 00:25:20.797 clat percentiles (msec): 00:25:20.797 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 48], 20.00th=[ 74], 00:25:20.797 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 105], 00:25:20.797 | 70.00th=[ 107], 80.00th=[ 108], 90.00th=[ 113], 95.00th=[ 116], 00:25:20.797 | 99.00th=[ 153], 99.50th=[ 192], 99.90th=[ 259], 99.95th=[ 268], 00:25:20.797 | 99.99th=[ 279] 00:25:20.797 bw ( KiB/s): min=139776, max=276992, per=9.72%, avg=182348.80, stdev=41591.52, samples=20 00:25:20.797 iops : min= 546, max= 1082, avg=712.30, stdev=162.47, samples=20 00:25:20.797 lat (msec) : 2=0.03%, 4=0.14%, 10=1.14%, 20=2.18%, 50=7.10% 00:25:20.797 lat (msec) : 100=42.53%, 250=46.74%, 500=0.14% 00:25:20.797 cpu : usr=1.66%, sys=2.04%, ctx=2538, majf=0, minf=2 00:25:20.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.797 issued rwts: total=0,7186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.797 job3: (groupid=0, jobs=1): err= 0: pid=3280391: Sun Oct 13 17:35:28 2024 00:25:20.797 write: IOPS=581, BW=145MiB/s (153MB/s)(1474MiB/10134msec); 0 zone resets 00:25:20.797 slat (usec): min=24, max=118702, avg=1573.42, stdev=3466.63 00:25:20.797 clat (msec): min=17, max=279, avg=108.35, stdev=26.48 00:25:20.797 lat (msec): min=19, max=279, avg=109.92, stdev=26.78 00:25:20.797 clat percentiles (msec): 00:25:20.797 | 1.00th=[ 41], 5.00th=[ 69], 10.00th=[ 81], 20.00th=[ 86], 00:25:20.797 | 30.00th=[ 101], 40.00th=[ 106], 50.00th=[ 107], 60.00th=[ 108], 00:25:20.797 | 70.00th=[ 122], 80.00th=[ 128], 90.00th=[ 136], 95.00th=[ 148], 00:25:20.797 | 99.00th=[ 194], 99.50th=[ 224], 99.90th=[ 271], 99.95th=[ 271], 00:25:20.797 | 99.99th=[ 279] 00:25:20.797 bw ( KiB/s): min=88240, max=220160, per=7.96%, avg=149333.60, stdev=30643.64, samples=20 00:25:20.797 iops : min= 344, max= 860, avg=583.30, stdev=119.77, samples=20 00:25:20.797 lat (msec) : 20=0.03%, 50=2.36%, 100=26.49%, 250=70.88%, 500=0.24% 00:25:20.797 cpu : usr=1.75%, sys=1.81%, ctx=1903, majf=0, minf=1 00:25:20.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.797 issued rwts: total=0,5896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.797 job4: (groupid=0, jobs=1): err= 0: pid=3280392: Sun Oct 13 17:35:28 2024 00:25:20.797 write: IOPS=716, BW=179MiB/s (188MB/s)(1815MiB/10135msec); 0 zone resets 00:25:20.797 slat (usec): min=24, max=11903, avg=1339.19, stdev=2454.89 00:25:20.797 clat (msec): min=8, max=279, avg=87.97, stdev=27.94 00:25:20.797 lat (msec): min=8, 
max=279, avg=89.31, stdev=28.28 00:25:20.797 clat percentiles (msec): 00:25:20.797 | 1.00th=[ 45], 5.00th=[ 54], 10.00th=[ 61], 20.00th=[ 64], 00:25:20.797 | 30.00th=[ 71], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 88], 00:25:20.797 | 70.00th=[ 89], 80.00th=[ 121], 90.00th=[ 129], 95.00th=[ 132], 00:25:20.797 | 99.00th=[ 153], 99.50th=[ 197], 99.90th=[ 262], 99.95th=[ 271], 00:25:20.797 | 99.99th=[ 279] 00:25:20.797 bw ( KiB/s): min=113152, max=288256, per=9.83%, avg=184243.20, stdev=50738.54, samples=20 00:25:20.797 iops : min= 442, max= 1126, avg=719.70, stdev=198.20, samples=20 00:25:20.797 lat (msec) : 10=0.06%, 20=0.11%, 50=3.31%, 100=73.65%, 250=22.69% 00:25:20.797 lat (msec) : 500=0.19% 00:25:20.797 cpu : usr=1.33%, sys=2.25%, ctx=1950, majf=0, minf=2 00:25:20.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.797 issued rwts: total=0,7260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.797 job5: (groupid=0, jobs=1): err= 0: pid=3280393: Sun Oct 13 17:35:28 2024 00:25:20.797 write: IOPS=489, BW=122MiB/s (128MB/s)(1241MiB/10135msec); 0 zone resets 00:25:20.797 slat (usec): min=23, max=58117, avg=2010.14, stdev=3571.00 00:25:20.797 clat (msec): min=60, max=278, avg=128.58, stdev=16.17 00:25:20.797 lat (msec): min=60, max=278, avg=130.59, stdev=15.98 00:25:20.797 clat percentiles (msec): 00:25:20.797 | 1.00th=[ 94], 5.00th=[ 103], 10.00th=[ 107], 20.00th=[ 122], 00:25:20.797 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 132], 00:25:20.797 | 70.00th=[ 133], 80.00th=[ 134], 90.00th=[ 142], 95.00th=[ 150], 00:25:20.797 | 99.00th=[ 178], 99.50th=[ 222], 99.90th=[ 271], 99.95th=[ 271], 00:25:20.797 | 99.99th=[ 279] 00:25:20.797 bw ( KiB/s): min=104448, max=145920, per=6.69%, 
avg=125491.20, stdev=9620.18, samples=20 00:25:20.797 iops : min= 408, max= 570, avg=490.20, stdev=37.58, samples=20 00:25:20.797 lat (msec) : 100=4.45%, 250=95.27%, 500=0.28% 00:25:20.797 cpu : usr=1.16%, sys=1.59%, ctx=1233, majf=0, minf=1 00:25:20.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.797 issued rwts: total=0,4965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.797 job6: (groupid=0, jobs=1): err= 0: pid=3280394: Sun Oct 13 17:35:28 2024 00:25:20.797 write: IOPS=744, BW=186MiB/s (195MB/s)(1876MiB/10077msec); 0 zone resets 00:25:20.797 slat (usec): min=18, max=35810, avg=1306.21, stdev=2357.99 00:25:20.797 clat (msec): min=7, max=161, avg=84.41, stdev=18.78 00:25:20.797 lat (msec): min=8, max=161, avg=85.71, stdev=18.95 00:25:20.797 clat percentiles (msec): 00:25:20.797 | 1.00th=[ 31], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 67], 00:25:20.797 | 30.00th=[ 80], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 87], 00:25:20.797 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 108], 95.00th=[ 108], 00:25:20.797 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 153], 99.95th=[ 157], 00:25:20.797 | 99.99th=[ 163] 00:25:20.797 bw ( KiB/s): min=151552, max=252928, per=10.16%, avg=190438.40, stdev=32665.30, samples=20 00:25:20.797 iops : min= 592, max= 988, avg=743.90, stdev=127.60, samples=20 00:25:20.797 lat (msec) : 10=0.05%, 20=0.39%, 50=2.85%, 100=69.79%, 250=26.91% 00:25:20.797 cpu : usr=1.45%, sys=2.53%, ctx=2017, majf=0, minf=1 00:25:20.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.797 issued 
rwts: total=0,7502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.797 job7: (groupid=0, jobs=1): err= 0: pid=3280395: Sun Oct 13 17:35:28 2024 00:25:20.797 write: IOPS=755, BW=189MiB/s (198MB/s)(1904MiB/10081msec); 0 zone resets 00:25:20.797 slat (usec): min=23, max=10993, avg=1238.40, stdev=2277.13 00:25:20.797 clat (msec): min=3, max=168, avg=83.46, stdev=22.02 00:25:20.797 lat (msec): min=3, max=168, avg=84.70, stdev=22.28 00:25:20.797 clat percentiles (msec): 00:25:20.797 | 1.00th=[ 12], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 70], 00:25:20.797 | 30.00th=[ 80], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 88], 00:25:20.797 | 70.00th=[ 90], 80.00th=[ 103], 90.00th=[ 112], 95.00th=[ 115], 00:25:20.797 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 159], 99.95th=[ 163], 00:25:20.797 | 99.99th=[ 169] 00:25:20.797 bw ( KiB/s): min=141312, max=296448, per=10.31%, avg=193351.10, stdev=42005.94, samples=20 00:25:20.797 iops : min= 552, max= 1158, avg=755.25, stdev=164.08, samples=20 00:25:20.797 lat (msec) : 4=0.08%, 10=0.63%, 20=1.05%, 50=2.65%, 100=74.71% 00:25:20.797 lat (msec) : 250=20.88% 00:25:20.797 cpu : usr=1.71%, sys=2.29%, ctx=2292, majf=0, minf=1 00:25:20.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.797 issued rwts: total=0,7615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.797 job8: (groupid=0, jobs=1): err= 0: pid=3280396: Sun Oct 13 17:35:28 2024 00:25:20.797 write: IOPS=755, BW=189MiB/s (198MB/s)(1905MiB/10079msec); 0 zone resets 00:25:20.797 slat (usec): min=21, max=55574, avg=1203.60, stdev=2612.25 00:25:20.797 clat (msec): min=2, max=161, avg=83.40, stdev=28.07 00:25:20.797 lat (msec): min=2, max=161, avg=84.61, 
stdev=28.49 00:25:20.797 clat percentiles (msec): 00:25:20.797 | 1.00th=[ 12], 5.00th=[ 34], 10.00th=[ 46], 20.00th=[ 61], 00:25:20.797 | 30.00th=[ 69], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 87], 00:25:20.797 | 70.00th=[ 102], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 131], 00:25:20.797 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 157], 00:25:20.797 | 99.99th=[ 161] 00:25:20.797 bw ( KiB/s): min=118784, max=351744, per=10.32%, avg=193433.60, stdev=55117.46, samples=20 00:25:20.797 iops : min= 464, max= 1374, avg=755.60, stdev=215.30, samples=20 00:25:20.797 lat (msec) : 4=0.13%, 10=0.67%, 20=1.46%, 50=10.32%, 100=53.97% 00:25:20.797 lat (msec) : 250=33.46% 00:25:20.797 cpu : usr=1.57%, sys=2.50%, ctx=2538, majf=0, minf=1 00:25:20.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.797 issued rwts: total=0,7619,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.797 job9: (groupid=0, jobs=1): err= 0: pid=3280397: Sun Oct 13 17:35:28 2024 00:25:20.797 write: IOPS=593, BW=148MiB/s (156MB/s)(1495MiB/10079msec); 0 zone resets 00:25:20.797 slat (usec): min=17, max=49945, avg=1587.51, stdev=2955.76 00:25:20.797 clat (msec): min=7, max=170, avg=106.23, stdev=25.77 00:25:20.797 lat (msec): min=7, max=170, avg=107.82, stdev=26.05 00:25:20.797 clat percentiles (msec): 00:25:20.797 | 1.00th=[ 41], 5.00th=[ 66], 10.00th=[ 80], 20.00th=[ 85], 00:25:20.797 | 30.00th=[ 86], 40.00th=[ 95], 50.00th=[ 106], 60.00th=[ 124], 00:25:20.797 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 134], 95.00th=[ 138], 00:25:20.797 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 171], 99.95th=[ 171], 00:25:20.798 | 99.99th=[ 171] 00:25:20.798 bw ( KiB/s): min=116736, max=219136, per=8.08%, avg=151500.80, stdev=31890.96, 
samples=20 00:25:20.798 iops : min= 456, max= 856, avg=591.80, stdev=124.57, samples=20 00:25:20.798 lat (msec) : 10=0.03%, 20=0.02%, 50=1.54%, 100=42.62%, 250=55.79% 00:25:20.798 cpu : usr=1.26%, sys=1.92%, ctx=1732, majf=0, minf=1 00:25:20.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:20.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.798 issued rwts: total=0,5981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.798 job10: (groupid=0, jobs=1): err= 0: pid=3280398: Sun Oct 13 17:35:28 2024 00:25:20.798 write: IOPS=567, BW=142MiB/s (149MB/s)(1438MiB/10135msec); 0 zone resets 00:25:20.798 slat (usec): min=25, max=11616, avg=1724.23, stdev=3104.19 00:25:20.798 clat (msec): min=11, max=278, avg=111.04, stdev=32.22 00:25:20.798 lat (msec): min=11, max=278, avg=112.76, stdev=32.57 00:25:20.798 clat percentiles (msec): 00:25:20.798 | 1.00th=[ 44], 5.00th=[ 54], 10.00th=[ 57], 20.00th=[ 77], 00:25:20.798 | 30.00th=[ 99], 40.00th=[ 122], 50.00th=[ 126], 60.00th=[ 128], 00:25:20.798 | 70.00th=[ 132], 80.00th=[ 133], 90.00th=[ 134], 95.00th=[ 144], 00:25:20.798 | 99.00th=[ 159], 99.50th=[ 213], 99.90th=[ 271], 99.95th=[ 271], 00:25:20.798 | 99.99th=[ 279] 00:25:20.798 bw ( KiB/s): min=113152, max=291840, per=7.76%, avg=145587.20, stdev=45917.81, samples=20 00:25:20.798 iops : min= 442, max= 1140, avg=568.70, stdev=179.37, samples=20 00:25:20.798 lat (msec) : 20=0.21%, 50=1.15%, 100=29.22%, 250=69.18%, 500=0.24% 00:25:20.798 cpu : usr=1.35%, sys=1.83%, ctx=1474, majf=0, minf=1 00:25:20.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:20.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.798 issued 
rwts: total=0,5750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.798 00:25:20.798 Run status group 0 (all jobs): 00:25:20.798 WRITE: bw=1831MiB/s (1920MB/s), 122MiB/s-189MiB/s (128MB/s-198MB/s), io=18.1GiB (19.5GB), run=10077-10135msec 00:25:20.798 00:25:20.798 Disk stats (read/write): 00:25:20.798 nvme0n1: ios=49/13703, merge=0/0, ticks=84/1201147, in_queue=1201231, util=96.69% 00:25:20.798 nvme10n1: ios=47/14509, merge=0/0, ticks=2450/1194736, in_queue=1197186, util=100.00% 00:25:20.798 nvme1n1: ios=47/14314, merge=0/0, ticks=193/1228067, in_queue=1228260, util=98.51% 00:25:20.798 nvme2n1: ios=45/11738, merge=0/0, ticks=2272/1224026, in_queue=1226298, util=100.00% 00:25:20.798 nvme3n1: ios=15/14465, merge=0/0, ticks=112/1225666, in_queue=1225778, util=97.73% 00:25:20.798 nvme4n1: ios=0/9874, merge=0/0, ticks=0/1223510, in_queue=1223510, util=97.72% 00:25:20.798 nvme5n1: ios=42/14635, merge=0/0, ticks=634/1193486, in_queue=1194120, util=100.00% 00:25:20.798 nvme6n1: ios=23/15229, merge=0/0, ticks=171/1230826, in_queue=1230997, util=99.38% 00:25:20.798 nvme7n1: ios=45/14866, merge=0/0, ticks=2472/1186692, in_queue=1189164, util=100.00% 00:25:20.798 nvme8n1: ios=0/11587, merge=0/0, ticks=0/1200141, in_queue=1200141, util=98.86% 00:25:20.798 nvme9n1: ios=0/11444, merge=0/0, ticks=0/1223885, in_queue=1223885, util=99.09% 00:25:20.798 17:35:28 -- target/multiconnection.sh@36 -- # sync 00:25:20.798 17:35:28 -- target/multiconnection.sh@37 -- # seq 1 11 00:25:20.798 17:35:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.798 17:35:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:20.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:20.798 17:35:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:20.798 17:35:28 -- common/autotest_common.sh@1198 -- # local i=0 00:25:20.798 17:35:28 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:20.798 17:35:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:25:20.798 17:35:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:20.798 17:35:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:25:20.798 17:35:28 -- common/autotest_common.sh@1210 -- # return 0 00:25:20.798 17:35:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:20.798 17:35:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.798 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:25:20.798 17:35:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.798 17:35:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.798 17:35:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:20.798 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:20.798 17:35:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:20.798 17:35:29 -- common/autotest_common.sh@1198 -- # local i=0 00:25:20.798 17:35:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:20.798 17:35:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:25:20.798 17:35:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:20.798 17:35:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:25:20.798 17:35:29 -- common/autotest_common.sh@1210 -- # return 0 00:25:20.798 17:35:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:20.798 17:35:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.798 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:25:20.798 17:35:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.798 17:35:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.798 17:35:29 -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:21.059 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:21.059 17:35:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:21.059 17:35:29 -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.059 17:35:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:21.059 17:35:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:25:21.059 17:35:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:25:21.059 17:35:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:21.059 17:35:29 -- common/autotest_common.sh@1210 -- # return 0 00:25:21.059 17:35:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:21.059 17:35:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.059 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:25:21.059 17:35:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.059 17:35:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.059 17:35:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:21.629 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:21.629 17:35:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:21.629 17:35:29 -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.629 17:35:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:21.629 17:35:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:25:21.629 17:35:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:21.629 17:35:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:25:21.629 17:35:29 -- common/autotest_common.sh@1210 -- # return 0 00:25:21.629 17:35:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:21.629 17:35:29 -- common/autotest_common.sh@551 -- 
# xtrace_disable 00:25:21.629 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:25:21.629 17:35:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.629 17:35:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.629 17:35:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:21.629 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:21.629 17:35:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:21.629 17:35:30 -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.629 17:35:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:21.629 17:35:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:25:21.629 17:35:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:21.629 17:35:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:25:21.629 17:35:30 -- common/autotest_common.sh@1210 -- # return 0 00:25:21.629 17:35:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:21.629 17:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.629 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:25:21.629 17:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.629 17:35:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.629 17:35:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:21.891 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:21.891 17:35:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:21.891 17:35:30 -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.891 17:35:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:21.891 17:35:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:25:21.891 17:35:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:21.891 
17:35:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:25:21.891 17:35:30 -- common/autotest_common.sh@1210 -- # return 0 00:25:21.891 17:35:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:21.891 17:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.891 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:25:21.891 17:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.891 17:35:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.891 17:35:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:22.151 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:22.151 17:35:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:22.151 17:35:30 -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.151 17:35:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:22.151 17:35:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:25:22.151 17:35:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:25:22.151 17:35:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:22.151 17:35:30 -- common/autotest_common.sh@1210 -- # return 0 00:25:22.151 17:35:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:22.151 17:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.151 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:25:22.151 17:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.151 17:35:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.151 17:35:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:22.151 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:22.151 17:35:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:22.151 17:35:30 -- 
common/autotest_common.sh@1198 -- # local i=0 00:25:22.151 17:35:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:22.151 17:35:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:25:22.151 17:35:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:22.151 17:35:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:25:22.151 17:35:30 -- common/autotest_common.sh@1210 -- # return 0 00:25:22.151 17:35:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:22.151 17:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.151 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:25:22.470 17:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.470 17:35:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.470 17:35:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:22.470 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:22.470 17:35:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:22.470 17:35:30 -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.470 17:35:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:22.470 17:35:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:25:22.470 17:35:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:22.470 17:35:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:25:22.470 17:35:30 -- common/autotest_common.sh@1210 -- # return 0 00:25:22.470 17:35:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:22.470 17:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.470 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:25:22.470 17:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.470 17:35:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:25:22.470 17:35:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:22.470 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:22.470 17:35:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:22.470 17:35:30 -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.470 17:35:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:22.470 17:35:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:25:22.470 17:35:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:25:22.470 17:35:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:22.470 17:35:30 -- common/autotest_common.sh@1210 -- # return 0 00:25:22.470 17:35:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:22.470 17:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.470 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:25:22.731 17:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.731 17:35:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.731 17:35:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:22.731 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:22.731 17:35:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:22.731 17:35:31 -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.731 17:35:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:22.731 17:35:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:25:22.731 17:35:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:22.731 17:35:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:25:22.731 17:35:31 -- common/autotest_common.sh@1210 -- # return 0 00:25:22.731 17:35:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode11 00:25:22.731 17:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.731 17:35:31 -- common/autotest_common.sh@10 -- # set +x 00:25:22.731 17:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.731 17:35:31 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:22.731 17:35:31 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:22.731 17:35:31 -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:22.731 17:35:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:22.731 17:35:31 -- nvmf/common.sh@116 -- # sync 00:25:22.731 17:35:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:22.731 17:35:31 -- nvmf/common.sh@119 -- # set +e 00:25:22.731 17:35:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:22.731 17:35:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:22.731 rmmod nvme_tcp 00:25:22.731 rmmod nvme_fabrics 00:25:22.731 rmmod nvme_keyring 00:25:22.731 17:35:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:22.731 17:35:31 -- nvmf/common.sh@123 -- # set -e 00:25:22.731 17:35:31 -- nvmf/common.sh@124 -- # return 0 00:25:22.731 17:35:31 -- nvmf/common.sh@477 -- # '[' -n 3269387 ']' 00:25:22.731 17:35:31 -- nvmf/common.sh@478 -- # killprocess 3269387 00:25:22.731 17:35:31 -- common/autotest_common.sh@926 -- # '[' -z 3269387 ']' 00:25:22.731 17:35:31 -- common/autotest_common.sh@930 -- # kill -0 3269387 00:25:22.731 17:35:31 -- common/autotest_common.sh@931 -- # uname 00:25:22.731 17:35:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:22.731 17:35:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3269387 00:25:22.992 17:35:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:22.992 17:35:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:22.992 17:35:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3269387' 00:25:22.992 killing process with pid 
3269387 00:25:22.992 17:35:31 -- common/autotest_common.sh@945 -- # kill 3269387 00:25:22.992 17:35:31 -- common/autotest_common.sh@950 -- # wait 3269387 00:25:23.252 17:35:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:23.252 17:35:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:23.252 17:35:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:23.252 17:35:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:23.252 17:35:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:23.252 17:35:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.252 17:35:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.252 17:35:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.165 17:35:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:25.165 00:25:25.165 real 1m17.352s 00:25:25.165 user 4m52.499s 00:25:25.165 sys 0m22.301s 00:25:25.165 17:35:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.165 17:35:33 -- common/autotest_common.sh@10 -- # set +x 00:25:25.165 ************************************ 00:25:25.165 END TEST nvmf_multiconnection 00:25:25.165 ************************************ 00:25:25.165 17:35:33 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:25.165 17:35:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:25.165 17:35:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:25.165 17:35:33 -- common/autotest_common.sh@10 -- # set +x 00:25:25.165 ************************************ 00:25:25.165 START TEST nvmf_initiator_timeout 00:25:25.165 ************************************ 00:25:25.165 17:35:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:25.426 * Looking for test storage... 
00:25:25.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.426 17:35:33 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.426 17:35:33 -- nvmf/common.sh@7 -- # uname -s 00:25:25.426 17:35:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.426 17:35:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.426 17:35:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.426 17:35:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.426 17:35:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.426 17:35:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.426 17:35:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.426 17:35:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.426 17:35:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.426 17:35:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.426 17:35:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:25.426 17:35:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:25.426 17:35:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.426 17:35:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.426 17:35:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.426 17:35:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.426 17:35:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.426 17:35:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.426 17:35:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.426 17:35:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.426 17:35:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.426 17:35:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.426 17:35:33 -- paths/export.sh@5 -- # export PATH 00:25:25.426 17:35:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.426 17:35:33 -- nvmf/common.sh@46 -- # : 0 00:25:25.426 17:35:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:25.426 17:35:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:25.426 17:35:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:25.426 17:35:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.426 17:35:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.426 17:35:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:25.426 17:35:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:25.426 17:35:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:25.426 17:35:33 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:25.426 17:35:33 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:25.426 17:35:33 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:25.426 17:35:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:25.426 17:35:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.426 17:35:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:25.426 17:35:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:25.426 17:35:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:25.426 17:35:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.426 17:35:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.426 17:35:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:25:25.426 17:35:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:25.426 17:35:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:25.426 17:35:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:25.426 17:35:33 -- common/autotest_common.sh@10 -- # set +x 00:25:33.599 17:35:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:33.599 17:35:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:33.599 17:35:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:33.599 17:35:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:33.599 17:35:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:33.599 17:35:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:33.599 17:35:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:33.599 17:35:40 -- nvmf/common.sh@294 -- # net_devs=() 00:25:33.599 17:35:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:33.599 17:35:40 -- nvmf/common.sh@295 -- # e810=() 00:25:33.599 17:35:40 -- nvmf/common.sh@295 -- # local -ga e810 00:25:33.599 17:35:40 -- nvmf/common.sh@296 -- # x722=() 00:25:33.599 17:35:40 -- nvmf/common.sh@296 -- # local -ga x722 00:25:33.600 17:35:40 -- nvmf/common.sh@297 -- # mlx=() 00:25:33.600 17:35:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:33.600 17:35:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.600 17:35:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.600 17:35:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.600 17:35:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.600 17:35:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.600 17:35:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.600 17:35:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.600 17:35:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:25:33.600 17:35:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.600 17:35:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.600 17:35:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.600 17:35:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:33.600 17:35:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:33.600 17:35:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:33.600 17:35:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:33.600 17:35:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:33.600 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:33.600 17:35:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:33.600 17:35:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:33.600 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:33.600 17:35:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:33.600 17:35:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:33.600 
17:35:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:33.600 17:35:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.600 17:35:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:33.600 17:35:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.600 17:35:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:33.600 Found net devices under 0000:31:00.0: cvl_0_0 00:25:33.600 17:35:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.600 17:35:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:33.600 17:35:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.600 17:35:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:33.600 17:35:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.600 17:35:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:33.600 Found net devices under 0000:31:00.1: cvl_0_1 00:25:33.600 17:35:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.600 17:35:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:33.600 17:35:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:33.600 17:35:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:33.600 17:35:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:33.600 17:35:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.600 17:35:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.600 17:35:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.600 17:35:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:33.600 17:35:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.600 17:35:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.600 17:35:40 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:33.600 17:35:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.600 17:35:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.600 17:35:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:33.600 17:35:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:33.600 17:35:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.600 17:35:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.600 17:35:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.600 17:35:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.600 17:35:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:33.600 17:35:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.600 17:35:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.600 17:35:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.600 17:35:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:33.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:25:33.600 00:25:33.600 --- 10.0.0.2 ping statistics --- 00:25:33.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.600 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:25:33.600 17:35:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:25:33.600 00:25:33.600 --- 10.0.0.1 ping statistics --- 00:25:33.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.600 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:25:33.600 17:35:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.600 17:35:41 -- nvmf/common.sh@410 -- # return 0 00:25:33.600 17:35:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:33.600 17:35:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.600 17:35:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:33.600 17:35:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:33.600 17:35:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.600 17:35:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:33.600 17:35:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:33.600 17:35:41 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:33.600 17:35:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:33.600 17:35:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:33.600 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:25:33.600 17:35:41 -- nvmf/common.sh@469 -- # nvmfpid=3286998 00:25:33.600 17:35:41 -- nvmf/common.sh@470 -- # waitforlisten 3286998 00:25:33.600 17:35:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:33.600 17:35:41 -- common/autotest_common.sh@819 -- # '[' -z 3286998 ']' 00:25:33.600 17:35:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.600 17:35:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:33.600 17:35:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:33.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.600 17:35:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:33.600 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:25:33.600 [2024-10-13 17:35:41.327092] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:33.600 [2024-10-13 17:35:41.327152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.600 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.600 [2024-10-13 17:35:41.402140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:33.600 [2024-10-13 17:35:41.433002] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:33.600 [2024-10-13 17:35:41.433136] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.600 [2024-10-13 17:35:41.433147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.600 [2024-10-13 17:35:41.433156] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:33.600 [2024-10-13 17:35:41.433289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.600 [2024-10-13 17:35:41.433406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.600 [2024-10-13 17:35:41.433560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.600 [2024-10-13 17:35:41.433561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.860 17:35:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:33.860 17:35:42 -- common/autotest_common.sh@852 -- # return 0 00:25:33.860 17:35:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:33.860 17:35:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:33.860 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:25:33.860 17:35:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.860 17:35:42 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:33.860 17:35:42 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:33.860 17:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.860 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:25:33.860 Malloc0 00:25:33.860 17:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.860 17:35:42 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:33.860 17:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.860 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:25:33.860 Delay0 00:25:33.860 17:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.860 17:35:42 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.860 17:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.860 17:35:42 -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.860 [2024-10-13 17:35:42.255939] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.860 17:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.860 17:35:42 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:33.860 17:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.860 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:25:33.860 17:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.860 17:35:42 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:33.860 17:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.860 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:25:33.860 17:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.860 17:35:42 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.860 17:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.860 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:25:33.860 [2024-10-13 17:35:42.296236] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.860 17:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.860 17:35:42 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:35.769 17:35:43 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:35.769 17:35:43 -- common/autotest_common.sh@1177 -- # local i=0 00:25:35.769 17:35:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:25:35.769 17:35:43 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:25:35.769 17:35:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:25:37.680 17:35:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:25:37.680 17:35:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:25:37.680 17:35:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:25:37.680 17:35:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:25:37.680 17:35:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.680 17:35:45 -- common/autotest_common.sh@1187 -- # return 0 00:25:37.680 17:35:45 -- target/initiator_timeout.sh@35 -- # fio_pid=3287835 00:25:37.680 17:35:45 -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:37.680 17:35:45 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:37.680 [global] 00:25:37.680 thread=1 00:25:37.680 invalidate=1 00:25:37.680 rw=write 00:25:37.680 time_based=1 00:25:37.680 runtime=60 00:25:37.680 ioengine=libaio 00:25:37.680 direct=1 00:25:37.680 bs=4096 00:25:37.680 iodepth=1 00:25:37.680 norandommap=0 00:25:37.680 numjobs=1 00:25:37.680 00:25:37.680 verify_dump=1 00:25:37.680 verify_backlog=512 00:25:37.680 verify_state_save=0 00:25:37.680 do_verify=1 00:25:37.680 verify=crc32c-intel 00:25:37.680 [job0] 00:25:37.680 filename=/dev/nvme0n1 00:25:37.680 Could not set queue depth (nvme0n1) 00:25:37.680 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:37.680 fio-3.35 00:25:37.680 Starting 1 thread 00:25:40.980 17:35:48 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:40.980 17:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.980 17:35:48 -- common/autotest_common.sh@10 -- # set +x 00:25:40.980 true 00:25:40.980 17:35:48 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:25:40.980 17:35:48 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:40.980 17:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.980 17:35:48 -- common/autotest_common.sh@10 -- # set +x 00:25:40.980 true 00:25:40.980 17:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.980 17:35:48 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:40.980 17:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.980 17:35:48 -- common/autotest_common.sh@10 -- # set +x 00:25:40.980 true 00:25:40.980 17:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.980 17:35:48 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:40.980 17:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.980 17:35:48 -- common/autotest_common.sh@10 -- # set +x 00:25:40.980 true 00:25:40.980 17:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.980 17:35:48 -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:43.526 17:35:51 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:43.526 17:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.526 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:25:43.526 true 00:25:43.526 17:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.526 17:35:51 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:43.526 17:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.526 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:25:43.526 true 00:25:43.526 17:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.526 17:35:51 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:43.526 17:35:51 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:25:43.526 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:25:43.526 true 00:25:43.526 17:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.526 17:35:51 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:43.526 17:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.526 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:25:43.526 true 00:25:43.526 17:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.526 17:35:51 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:43.526 17:35:51 -- target/initiator_timeout.sh@54 -- # wait 3287835 00:26:39.791 00:26:39.791 job0: (groupid=0, jobs=1): err= 0: pid=3288022: Sun Oct 13 17:36:46 2024 00:26:39.791 read: IOPS=162, BW=649KiB/s (664kB/s)(38.0MiB/60001msec) 00:26:39.791 slat (nsec): min=7202, max=65488, avg=26862.97, stdev=3031.01 00:26:39.791 clat (usec): min=538, max=42041, avg=1190.81, stdev=2957.36 00:26:39.791 lat (usec): min=565, max=42067, avg=1217.68, stdev=2957.36 00:26:39.791 clat percentiles (usec): 00:26:39.791 | 1.00th=[ 799], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 922], 00:26:39.791 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:26:39.791 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1090], 00:26:39.791 | 99.00th=[ 1156], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:39.791 | 99.99th=[42206] 00:26:39.791 write: IOPS=168, BW=676KiB/s (692kB/s)(39.6MiB/60001msec); 0 zone resets 00:26:39.791 slat (usec): min=9, max=11273, avg=32.81, stdev=140.94 00:26:39.791 clat (usec): min=207, max=41794k, avg=4701.56, stdev=415103.83 00:26:39.791 lat (usec): min=224, max=41794k, avg=4734.37, stdev=415103.85 00:26:39.791 clat percentiles (usec): 00:26:39.791 | 1.00th=[ 338], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 494], 00:26:39.791 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 603], 00:26:39.791 | 70.00th=[ 635], 80.00th=[ 676], 
90.00th=[ 701], 95.00th=[ 717], 00:26:39.791 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 848], 99.95th=[ 930], 00:26:39.791 | 99.99th=[ 4359] 00:26:39.791 bw ( KiB/s): min= 208, max= 4096, per=100.00%, avg=2432.00, stdev=1378.86, samples=32 00:26:39.791 iops : min= 52, max= 1024, avg=608.00, stdev=344.72, samples=32 00:26:39.791 lat (usec) : 250=0.10%, 500=11.25%, 750=39.02%, 1000=30.19% 00:26:39.791 lat (msec) : 2=19.17%, 10=0.01%, 50=0.26%, >=2000=0.01% 00:26:39.791 cpu : usr=0.43%, sys=1.09%, ctx=19871, majf=0, minf=1 00:26:39.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:39.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.791 issued rwts: total=9728,10137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:39.791 00:26:39.791 Run status group 0 (all jobs): 00:26:39.791 READ: bw=649KiB/s (664kB/s), 649KiB/s-649KiB/s (664kB/s-664kB/s), io=38.0MiB (39.8MB), run=60001-60001msec 00:26:39.791 WRITE: bw=676KiB/s (692kB/s), 676KiB/s-676KiB/s (692kB/s-692kB/s), io=39.6MiB (41.5MB), run=60001-60001msec 00:26:39.791 00:26:39.791 Disk stats (read/write): 00:26:39.791 nvme0n1: ios=9827/9971, merge=0/0, ticks=11270/5585, in_queue=16855, util=99.65% 00:26:39.791 17:36:46 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:39.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:39.791 17:36:46 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:39.791 17:36:46 -- common/autotest_common.sh@1198 -- # local i=0 00:26:39.791 17:36:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:26:39.791 17:36:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:39.791 17:36:46 -- common/autotest_common.sh@1206 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:26:39.791 17:36:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:39.791 17:36:46 -- common/autotest_common.sh@1210 -- # return 0 00:26:39.791 17:36:46 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:39.791 17:36:46 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:39.791 nvmf hotplug test: fio successful as expected 00:26:39.791 17:36:46 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:39.792 17:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.792 17:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:39.792 17:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.792 17:36:46 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:39.792 17:36:46 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:39.792 17:36:46 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:39.792 17:36:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:39.792 17:36:46 -- nvmf/common.sh@116 -- # sync 00:26:39.792 17:36:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:39.792 17:36:46 -- nvmf/common.sh@119 -- # set +e 00:26:39.792 17:36:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:39.792 17:36:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:39.792 rmmod nvme_tcp 00:26:39.792 rmmod nvme_fabrics 00:26:39.792 rmmod nvme_keyring 00:26:39.792 17:36:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:39.792 17:36:46 -- nvmf/common.sh@123 -- # set -e 00:26:39.792 17:36:46 -- nvmf/common.sh@124 -- # return 0 00:26:39.792 17:36:46 -- nvmf/common.sh@477 -- # '[' -n 3286998 ']' 00:26:39.792 17:36:46 -- nvmf/common.sh@478 -- # killprocess 3286998 00:26:39.792 17:36:46 -- common/autotest_common.sh@926 -- # '[' -z 3286998 ']' 00:26:39.792 17:36:46 -- common/autotest_common.sh@930 -- # kill -0 3286998 00:26:39.792 17:36:46 -- 
common/autotest_common.sh@931 -- # uname 00:26:39.792 17:36:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:39.792 17:36:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3286998 00:26:39.792 17:36:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:39.792 17:36:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:39.792 17:36:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3286998' 00:26:39.792 killing process with pid 3286998 00:26:39.792 17:36:46 -- common/autotest_common.sh@945 -- # kill 3286998 00:26:39.792 17:36:46 -- common/autotest_common.sh@950 -- # wait 3286998 00:26:39.792 17:36:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:39.792 17:36:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:39.792 17:36:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:39.792 17:36:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:39.792 17:36:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:39.792 17:36:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.792 17:36:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:39.792 17:36:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.363 17:36:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:40.363 00:26:40.363 real 1m15.199s 00:26:40.363 user 4m38.665s 00:26:40.363 sys 0m8.376s 00:26:40.363 17:36:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.363 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:40.363 ************************************ 00:26:40.363 END TEST nvmf_initiator_timeout 00:26:40.363 ************************************ 00:26:40.623 17:36:48 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:26:40.623 17:36:48 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:26:40.623 17:36:48 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:26:40.623 17:36:48 -- nvmf/common.sh@284 
-- # xtrace_disable 00:26:40.623 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:48.764 17:36:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:48.764 17:36:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:48.764 17:36:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:48.764 17:36:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:48.764 17:36:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:48.764 17:36:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:48.764 17:36:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:48.764 17:36:56 -- nvmf/common.sh@294 -- # net_devs=() 00:26:48.764 17:36:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:48.764 17:36:56 -- nvmf/common.sh@295 -- # e810=() 00:26:48.764 17:36:56 -- nvmf/common.sh@295 -- # local -ga e810 00:26:48.764 17:36:56 -- nvmf/common.sh@296 -- # x722=() 00:26:48.764 17:36:56 -- nvmf/common.sh@296 -- # local -ga x722 00:26:48.764 17:36:56 -- nvmf/common.sh@297 -- # mlx=() 00:26:48.764 17:36:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:48.765 17:36:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.765 17:36:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.765 17:36:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.765 17:36:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.765 17:36:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.765 17:36:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.765 17:36:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.765 17:36:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.765 17:36:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.765 17:36:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.765 
17:36:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.765 17:36:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:48.765 17:36:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:48.765 17:36:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:48.765 17:36:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:48.765 17:36:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:48.765 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:48.765 17:36:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:48.765 17:36:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:48.765 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:48.765 17:36:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:48.765 17:36:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:48.765 17:36:56 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.765 17:36:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:48.765 17:36:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.765 17:36:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:48.765 Found net devices under 0000:31:00.0: cvl_0_0 00:26:48.765 17:36:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.765 17:36:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:48.765 17:36:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.765 17:36:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:48.765 17:36:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.765 17:36:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:48.765 Found net devices under 0000:31:00.1: cvl_0_1 00:26:48.765 17:36:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.765 17:36:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:48.765 17:36:56 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.765 17:36:56 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:26:48.765 17:36:56 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:48.765 17:36:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:48.765 17:36:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:48.765 17:36:56 -- common/autotest_common.sh@10 -- # set +x 00:26:48.765 ************************************ 00:26:48.765 START TEST nvmf_perf_adq 00:26:48.765 ************************************ 00:26:48.765 17:36:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:48.765 * Looking for test storage... 
00:26:48.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:48.765 17:36:56 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.765 17:36:56 -- nvmf/common.sh@7 -- # uname -s 00:26:48.765 17:36:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.765 17:36:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.765 17:36:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.765 17:36:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.765 17:36:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.765 17:36:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.765 17:36:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.765 17:36:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.765 17:36:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.765 17:36:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.765 17:36:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:48.765 17:36:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:48.765 17:36:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.765 17:36:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.765 17:36:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.765 17:36:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.765 17:36:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.765 17:36:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.765 17:36:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.765 17:36:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.765 17:36:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.765 17:36:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.765 17:36:56 -- paths/export.sh@5 -- # export PATH 00:26:48.765 17:36:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.765 17:36:56 -- nvmf/common.sh@46 -- # : 0 00:26:48.765 17:36:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:48.765 17:36:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:48.765 17:36:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:48.765 17:36:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.765 17:36:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.765 17:36:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:48.765 17:36:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:48.765 17:36:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:48.765 17:36:56 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:48.765 17:36:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:48.765 17:36:56 -- common/autotest_common.sh@10 -- # set +x 00:26:55.355 17:37:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:55.355 17:37:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:55.355 17:37:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:55.355 17:37:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:55.355 17:37:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:55.355 17:37:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:55.355 17:37:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:55.355 17:37:03 -- nvmf/common.sh@294 -- # net_devs=() 00:26:55.355 17:37:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:55.355 17:37:03 
-- nvmf/common.sh@295 -- # e810=() 00:26:55.355 17:37:03 -- nvmf/common.sh@295 -- # local -ga e810 00:26:55.355 17:37:03 -- nvmf/common.sh@296 -- # x722=() 00:26:55.355 17:37:03 -- nvmf/common.sh@296 -- # local -ga x722 00:26:55.355 17:37:03 -- nvmf/common.sh@297 -- # mlx=() 00:26:55.355 17:37:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:55.355 17:37:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.355 17:37:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:55.355 17:37:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:55.355 17:37:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:55.355 17:37:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:55.355 17:37:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:55.355 Found 0000:31:00.0 (0x8086 - 0x159b) 
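Device classification above keys on vendor:device IDs (0x8086:0x159b is an E810 part bound to the `ice` driver). A sketch of that matching logic with a mocked-up `pci_bus_cache`; the bus addresses here are illustrative, not probed:

```shell
# Mocked pci_bus_cache: "vendor:device" -> space-separated BDF list.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"  # E810 (ice driver)
  ["0x8086:0x37d2"]=""                            # x722: none present here
)
intel=0x8086

e810=() x722=()
e810+=(${pci_bus_cache["$intel:0x159b"]})   # unquoted on purpose: word-split
x722+=(${pci_bus_cache["$intel:0x37d2"]})

pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  echo "Found $pci (0x8086 - 0x159b)"
done
```

The unquoted expansion is what turns each cached BDF list into individual array elements, which is why both 0000:31:00.0 and 0000:31:00.1 show up as separate "Found" records in the log.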
00:26:55.355 17:37:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:55.355 17:37:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:55.355 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:55.355 17:37:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:55.355 17:37:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:55.355 17:37:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:55.355 17:37:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.355 17:37:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:55.355 17:37:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.355 17:37:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:55.355 Found net devices under 0000:31:00.0: cvl_0_0 00:26:55.355 17:37:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.355 17:37:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:55.355 17:37:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.355 17:37:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:55.355 17:37:03 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.355 17:37:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:55.355 Found net devices under 0000:31:00.1: cvl_0_1 00:26:55.355 17:37:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.355 17:37:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:55.355 17:37:03 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.355 17:37:03 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:55.355 17:37:03 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:55.355 17:37:03 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:26:55.355 17:37:03 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:56.742 17:37:05 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:58.655 17:37:07 -- target/perf_adq.sh@54 -- # sleep 5 00:27:03.953 17:37:12 -- target/perf_adq.sh@67 -- # nvmftestinit 00:27:03.953 17:37:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:03.953 17:37:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.953 17:37:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:03.953 17:37:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:03.953 17:37:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:03.953 17:37:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.953 17:37:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.953 17:37:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.953 17:37:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:03.953 17:37:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:03.953 17:37:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:03.953 17:37:12 -- common/autotest_common.sh@10 -- # set +x 00:27:03.953 17:37:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:03.953 17:37:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:03.953 
17:37:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:03.953 17:37:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:03.953 17:37:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:03.953 17:37:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:03.953 17:37:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:03.953 17:37:12 -- nvmf/common.sh@294 -- # net_devs=() 00:27:03.953 17:37:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:03.953 17:37:12 -- nvmf/common.sh@295 -- # e810=() 00:27:03.953 17:37:12 -- nvmf/common.sh@295 -- # local -ga e810 00:27:03.953 17:37:12 -- nvmf/common.sh@296 -- # x722=() 00:27:03.953 17:37:12 -- nvmf/common.sh@296 -- # local -ga x722 00:27:03.953 17:37:12 -- nvmf/common.sh@297 -- # mlx=() 00:27:03.953 17:37:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:03.953 17:37:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.953 17:37:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:03.953 17:37:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:03.953 17:37:12 -- 
nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:03.953 17:37:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:03.953 17:37:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:03.953 17:37:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:03.953 17:37:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:03.953 17:37:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:03.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:03.953 17:37:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:03.953 17:37:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:03.953 17:37:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:03.954 17:37:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:03.954 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:03.954 17:37:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:03.954 17:37:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:03.954 17:37:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.954 17:37:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:03.954 17:37:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.954 17:37:12 -- nvmf/common.sh@388 -- # echo 'Found net 
devices under 0000:31:00.0: cvl_0_0' 00:27:03.954 Found net devices under 0000:31:00.0: cvl_0_0 00:27:03.954 17:37:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.954 17:37:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:03.954 17:37:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.954 17:37:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:03.954 17:37:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.954 17:37:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:03.954 Found net devices under 0000:31:00.1: cvl_0_1 00:27:03.954 17:37:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.954 17:37:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:03.954 17:37:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:03.954 17:37:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:03.954 17:37:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:03.954 17:37:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.954 17:37:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.954 17:37:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.954 17:37:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:03.954 17:37:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.954 17:37:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.954 17:37:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:03.954 17:37:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.954 17:37:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.954 17:37:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:03.954 17:37:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:03.954 17:37:12 -- nvmf/common.sh@247 -- # ip 
netns add cvl_0_0_ns_spdk 00:27:03.954 17:37:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.954 17:37:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.954 17:37:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.954 17:37:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:03.954 17:37:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.954 17:37:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.954 17:37:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.954 17:37:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:03.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:27:03.954 00:27:03.954 --- 10.0.0.2 ping statistics --- 00:27:03.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.954 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:27:03.954 17:37:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
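`nvmf_tcp_init` moves one physical function (cvl_0_0) into a private network namespace so the target and the initiator can share a single host. The commands from the trace are restated below as a function; it is only defined, never called, since running it needs root plus the real cvl_0_* interfaces, and the function name is ours, not SPDK's:

```shell
# Hypothetical wrapper around the nvmf/common.sh@247-263 sequence; defined
# only -- invoking it requires root and the cvl_0_* net devices.
setup_target_ns() {
  ip netns add cvl_0_0_ns_spdk                 # target lives in its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target-side PF over
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, host netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic (port 4420) arriving on the initiator side through.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
```

The two pings that bracket this point in the log are the smoke test that both directions of that topology actually pass traffic before the target starts.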
00:27:04.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:27:04.215 00:27:04.215 --- 10.0.0.1 ping statistics --- 00:27:04.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.215 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:04.215 17:37:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.215 17:37:12 -- nvmf/common.sh@410 -- # return 0 00:27:04.215 17:37:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:04.215 17:37:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.215 17:37:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:04.215 17:37:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:04.215 17:37:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.215 17:37:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:04.215 17:37:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:04.215 17:37:12 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:04.215 17:37:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:04.215 17:37:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:04.215 17:37:12 -- common/autotest_common.sh@10 -- # set +x 00:27:04.215 17:37:12 -- nvmf/common.sh@469 -- # nvmfpid=3309856 00:27:04.215 17:37:12 -- nvmf/common.sh@470 -- # waitforlisten 3309856 00:27:04.215 17:37:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:04.215 17:37:12 -- common/autotest_common.sh@819 -- # '[' -z 3309856 ']' 00:27:04.215 17:37:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.215 17:37:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:04.215 17:37:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:04.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.215 17:37:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:04.215 17:37:12 -- common/autotest_common.sh@10 -- # set +x 00:27:04.215 [2024-10-13 17:37:12.583233] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:04.215 [2024-10-13 17:37:12.583285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.215 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.215 [2024-10-13 17:37:12.651790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:04.215 [2024-10-13 17:37:12.681124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:04.215 [2024-10-13 17:37:12.681259] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.215 [2024-10-13 17:37:12.681269] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.215 [2024-10-13 17:37:12.681278] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:04.215 [2024-10-13 17:37:12.681419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.215 [2024-10-13 17:37:12.681539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:04.215 [2024-10-13 17:37:12.681695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.215 [2024-10-13 17:37:12.681696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.158 17:37:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:05.158 17:37:13 -- common/autotest_common.sh@852 -- # return 0 00:27:05.158 17:37:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:05.158 17:37:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:05.158 17:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:05.158 17:37:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.158 17:37:13 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:27:05.158 17:37:13 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:05.158 17:37:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.158 17:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:05.158 17:37:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.158 17:37:13 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:27:05.158 17:37:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.158 17:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:05.158 17:37:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.158 17:37:13 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:05.158 17:37:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.158 17:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:05.158 [2024-10-13 17:37:13.490012] tcp.c: 659:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:27:05.158 17:37:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.158 17:37:13 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:05.158 17:37:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.158 17:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:05.158 Malloc1 00:27:05.158 17:37:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.158 17:37:13 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:05.158 17:37:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.158 17:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:05.158 17:37:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.158 17:37:13 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:05.158 17:37:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.158 17:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:05.158 17:37:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.158 17:37:13 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.158 17:37:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.158 17:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:05.158 [2024-10-13 17:37:13.545330] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.158 17:37:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.158 17:37:13 -- target/perf_adq.sh@73 -- # perfpid=3310210 00:27:05.158 17:37:13 -- target/perf_adq.sh@74 -- # sleep 2 00:27:05.158 17:37:13 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
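`adq_configure_nvmf_target` (perf_adq.sh@42-48) issues the RPC sequence above against the freshly started `nvmf_tgt`. Restated here as a function over `scripts/rpc.py`, which is what `rpc_cmd` resolves to in autotest; it is defined only, since it needs a live target, and both the function name and the `$rpc` parameter are ours:

```shell
# Hypothetical restatement of the RPC sequence from the log; "$1" would be
# something like "scripts/rpc.py -s /var/tmp/spdk.sock".
adq_target_config() {
  local rpc="$1"
  # Placement-id 0 pins each accepted connection to one poll group (ADQ).
  $rpc sock_impl_set_options --enable-placement-id 0 \
       --enable-zerocopy-send-server -i posix
  $rpc framework_start_init          # target was launched with --wait-for-rpc
  $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420
}
```

The ordering matters: socket options must land before `framework_start_init`, which is why the test starts the target with `--wait-for-rpc` instead of letting it initialize on its own.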
00:27:05.158 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.070 17:37:15 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:27:07.070 17:37:15 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:07.070 17:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:07.070 17:37:15 -- target/perf_adq.sh@76 -- # wc -l 00:27:07.070 17:37:15 -- common/autotest_common.sh@10 -- # set +x 00:27:07.070 17:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:07.331 17:37:15 -- target/perf_adq.sh@76 -- # count=4 00:27:07.331 17:37:15 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:27:07.331 17:37:15 -- target/perf_adq.sh@81 -- # wait 3310210 00:27:15.602 Initializing NVMe Controllers 00:27:15.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:15.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:15.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:15.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:15.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:15.602 Initialization complete. Launching workers. 
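The `nvmf_get_stats | jq ... | wc -l` check at perf_adq.sh@76 passes only when each of the four poll groups owns exactly one IO qpair, i.e. ADQ steered every connection onto its own core. A self-contained sketch of that count against mocked stats output (the real JSON comes from the RPC; `grep -c` stands in for the jq filter):

```shell
# Mocked nvmf_get_stats JSON: one IO qpair per poll group is the ADQ-good case.
stats='{"poll_groups": [
  {"name": "nvmf_tgt_poll_group_0", "current_io_qpairs": 1},
  {"name": "nvmf_tgt_poll_group_1", "current_io_qpairs": 1},
  {"name": "nvmf_tgt_poll_group_2", "current_io_qpairs": 1},
  {"name": "nvmf_tgt_poll_group_3", "current_io_qpairs": 1}]}'

# Count poll groups holding exactly one qpair (stand-in for the jq pipeline).
count=$(grep -c '"current_io_qpairs": 1' <<<"$stats")
echo "poll groups with one qpair: $count"
```

In the log the check yields `count=4`, so `[[ 4 -ne 4 ]]` is false and the test proceeds to wait on the perf process; a skewed distribution (one group with 2 qpairs, one with 0) would fail it.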
00:27:15.602 ======================================================== 00:27:15.602 Latency(us) 00:27:15.602 Device Information : IOPS MiB/s Average min max 00:27:15.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14894.80 58.18 4297.12 1067.94 9173.55 00:27:15.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15252.60 59.58 4196.61 970.95 9631.26 00:27:15.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13882.90 54.23 4609.80 1109.12 11753.90 00:27:15.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12449.40 48.63 5157.11 1429.23 45672.68 00:27:15.602 ======================================================== 00:27:15.602 Total : 56479.70 220.62 4536.39 970.95 45672.68 00:27:15.602 00:27:15.602 17:37:23 -- target/perf_adq.sh@82 -- # nvmftestfini 00:27:15.602 17:37:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:15.602 17:37:23 -- nvmf/common.sh@116 -- # sync 00:27:15.602 17:37:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:15.602 17:37:23 -- nvmf/common.sh@119 -- # set +e 00:27:15.602 17:37:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:15.602 17:37:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:15.602 rmmod nvme_tcp 00:27:15.602 rmmod nvme_fabrics 00:27:15.602 rmmod nvme_keyring 00:27:15.602 17:37:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:15.602 17:37:23 -- nvmf/common.sh@123 -- # set -e 00:27:15.602 17:37:23 -- nvmf/common.sh@124 -- # return 0 00:27:15.602 17:37:23 -- nvmf/common.sh@477 -- # '[' -n 3309856 ']' 00:27:15.602 17:37:23 -- nvmf/common.sh@478 -- # killprocess 3309856 00:27:15.602 17:37:23 -- common/autotest_common.sh@926 -- # '[' -z 3309856 ']' 00:27:15.602 17:37:23 -- common/autotest_common.sh@930 -- # kill -0 3309856 00:27:15.602 17:37:23 -- common/autotest_common.sh@931 -- # uname 00:27:15.602 17:37:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:15.602 17:37:23 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3309856 00:27:15.602 17:37:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:15.602 17:37:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:15.602 17:37:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3309856' 00:27:15.602 killing process with pid 3309856 00:27:15.602 17:37:23 -- common/autotest_common.sh@945 -- # kill 3309856 00:27:15.602 17:37:23 -- common/autotest_common.sh@950 -- # wait 3309856 00:27:15.602 17:37:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:15.602 17:37:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:15.602 17:37:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:15.602 17:37:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.602 17:37:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:15.602 17:37:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.602 17:37:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.602 17:37:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.145 17:37:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:18.145 17:37:26 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:27:18.145 17:37:26 -- target/perf_adq.sh@52 -- # rmmod ice 00:27:19.528 17:37:27 -- target/perf_adq.sh@53 -- # modprobe ice 00:27:21.437 17:37:29 -- target/perf_adq.sh@54 -- # sleep 5 00:27:26.724 17:37:34 -- target/perf_adq.sh@87 -- # nvmftestinit 00:27:26.724 17:37:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:26.724 17:37:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.724 17:37:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:26.724 17:37:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:26.724 17:37:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:26.724 17:37:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.724 
17:37:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.724 17:37:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.724 17:37:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:26.724 17:37:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:26.724 17:37:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:26.724 17:37:34 -- common/autotest_common.sh@10 -- # set +x 00:27:26.724 17:37:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:26.724 17:37:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:26.724 17:37:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:26.724 17:37:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:26.724 17:37:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:26.724 17:37:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:26.724 17:37:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:26.724 17:37:34 -- nvmf/common.sh@294 -- # net_devs=() 00:27:26.724 17:37:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:26.724 17:37:34 -- nvmf/common.sh@295 -- # e810=() 00:27:26.724 17:37:34 -- nvmf/common.sh@295 -- # local -ga e810 00:27:26.724 17:37:34 -- nvmf/common.sh@296 -- # x722=() 00:27:26.724 17:37:34 -- nvmf/common.sh@296 -- # local -ga x722 00:27:26.724 17:37:34 -- nvmf/common.sh@297 -- # mlx=() 00:27:26.724 17:37:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:26.724 17:37:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.724 17:37:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.724 17:37:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.724 17:37:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.724 17:37:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.724 17:37:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.724 17:37:34 -- 
nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.724 17:37:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.724 17:37:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.724 17:37:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.725 17:37:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.725 17:37:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:26.725 17:37:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:26.725 17:37:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:26.725 17:37:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:26.725 17:37:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:26.725 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:26.725 17:37:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:26.725 17:37:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:26.725 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:26.725 17:37:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@351 -- # 
[[ tcp == rdma ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:26.725 17:37:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:26.725 17:37:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.725 17:37:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:26.725 17:37:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.725 17:37:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:26.725 Found net devices under 0000:31:00.0: cvl_0_0 00:27:26.725 17:37:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.725 17:37:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:26.725 17:37:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.725 17:37:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:26.725 17:37:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.725 17:37:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:26.725 Found net devices under 0000:31:00.1: cvl_0_1 00:27:26.725 17:37:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.725 17:37:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:26.725 17:37:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:26.725 17:37:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:26.725 17:37:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:26.725 17:37:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.725 17:37:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.725 17:37:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.725 17:37:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:26.725 17:37:34 -- 
nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.725 17:37:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.725 17:37:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:26.725 17:37:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.725 17:37:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.725 17:37:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:26.725 17:37:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:26.725 17:37:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.725 17:37:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.725 17:37:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.725 17:37:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.725 17:37:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:26.725 17:37:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.725 17:37:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.725 17:37:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.725 17:37:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:26.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:27:26.725 00:27:26.725 --- 10.0.0.2 ping statistics --- 00:27:26.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.725 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:27:26.725 17:37:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:26.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:27:26.725 00:27:26.725 --- 10.0.0.1 ping statistics --- 00:27:26.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.725 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:27:26.725 17:37:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.725 17:37:35 -- nvmf/common.sh@410 -- # return 0 00:27:26.725 17:37:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:26.725 17:37:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.725 17:37:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:26.725 17:37:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:26.725 17:37:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.725 17:37:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:26.725 17:37:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:26.725 17:37:35 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:27:26.725 17:37:35 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:26.725 17:37:35 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:26.725 17:37:35 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:26.725 net.core.busy_poll = 1 00:27:26.725 17:37:35 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:26.725 net.core.busy_read = 1 00:27:26.725 17:37:35 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:26.725 17:37:35 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:26.986 17:37:35 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:26.986 17:37:35 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:26.986 17:37:35 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:26.986 17:37:35 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:26.986 17:37:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:26.986 17:37:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:26.986 17:37:35 -- common/autotest_common.sh@10 -- # set +x 00:27:26.986 17:37:35 -- nvmf/common.sh@469 -- # nvmfpid=3314749 00:27:26.986 17:37:35 -- nvmf/common.sh@470 -- # waitforlisten 3314749 00:27:26.986 17:37:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:26.986 17:37:35 -- common/autotest_common.sh@819 -- # '[' -z 3314749 ']' 00:27:26.986 17:37:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.986 17:37:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:26.986 17:37:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.986 17:37:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:26.986 17:37:35 -- common/autotest_common.sh@10 -- # set +x 00:27:26.986 [2024-10-13 17:37:35.493449] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:26.986 [2024-10-13 17:37:35.493518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.246 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.246 [2024-10-13 17:37:35.567360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.246 [2024-10-13 17:37:35.605246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:27.246 [2024-10-13 17:37:35.605391] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.246 [2024-10-13 17:37:35.605401] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.246 [2024-10-13 17:37:35.605409] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:27.246 [2024-10-13 17:37:35.605555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.246 [2024-10-13 17:37:35.605676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.247 [2024-10-13 17:37:35.605837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.247 [2024-10-13 17:37:35.605838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:27.818 17:37:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:27.818 17:37:36 -- common/autotest_common.sh@852 -- # return 0 00:27:27.818 17:37:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:27.818 17:37:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:27.818 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:27.818 17:37:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.818 17:37:36 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:27:27.818 17:37:36 -- 
target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:27.818 17:37:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.818 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:27.818 17:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.818 17:37:36 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:27:27.818 17:37:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.818 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:28.079 17:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.079 17:37:36 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:28.079 17:37:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.079 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:28.079 [2024-10-13 17:37:36.411006] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.079 17:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.079 17:37:36 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:28.079 17:37:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.079 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:28.079 Malloc1 00:27:28.079 17:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.079 17:37:36 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:28.079 17:37:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.079 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:28.079 17:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.079 17:37:36 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:28.079 17:37:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.079 17:37:36 -- 
common/autotest_common.sh@10 -- # set +x 00:27:28.079 17:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.079 17:37:36 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.079 17:37:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.079 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:28.079 [2024-10-13 17:37:36.466329] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.079 17:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.079 17:37:36 -- target/perf_adq.sh@94 -- # perfpid=3315030 00:27:28.079 17:37:36 -- target/perf_adq.sh@95 -- # sleep 2 00:27:28.079 17:37:36 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:28.079 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.992 17:37:38 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:27:29.992 17:37:38 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:29.992 17:37:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.992 17:37:38 -- target/perf_adq.sh@97 -- # wc -l 00:27:29.992 17:37:38 -- common/autotest_common.sh@10 -- # set +x 00:27:29.992 17:37:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.253 17:37:38 -- target/perf_adq.sh@97 -- # count=2 00:27:30.253 17:37:38 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:27:30.253 17:37:38 -- target/perf_adq.sh@103 -- # wait 3315030 00:27:38.391 Initializing NVMe Controllers 00:27:38.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:38.391 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:38.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:38.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:38.391 Initialization complete. Launching workers. 00:27:38.391 ======================================================== 00:27:38.391 Latency(us) 00:27:38.391 Device Information : IOPS MiB/s Average min max 00:27:38.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13769.09 53.79 4648.84 965.88 45966.81 00:27:38.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7957.38 31.08 8060.27 1000.80 52467.79 00:27:38.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7086.09 27.68 9034.32 981.12 50653.93 00:27:38.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7755.78 30.30 8254.67 715.89 52115.41 00:27:38.391 ======================================================== 00:27:38.391 Total : 36568.35 142.85 7005.74 715.89 52467.79 00:27:38.391 00:27:38.391 17:37:46 -- target/perf_adq.sh@104 -- # nvmftestfini 00:27:38.391 17:37:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:38.391 17:37:46 -- nvmf/common.sh@116 -- # sync 00:27:38.391 17:37:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:38.391 17:37:46 -- nvmf/common.sh@119 -- # set +e 00:27:38.391 17:37:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:38.391 17:37:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:38.391 rmmod nvme_tcp 00:27:38.391 rmmod nvme_fabrics 00:27:38.391 rmmod nvme_keyring 00:27:38.391 17:37:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:38.391 17:37:46 -- nvmf/common.sh@123 -- # set -e 00:27:38.391 17:37:46 -- nvmf/common.sh@124 -- # return 0 00:27:38.391 17:37:46 -- nvmf/common.sh@477 -- # '[' -n 3314749 ']' 00:27:38.391 17:37:46 -- nvmf/common.sh@478 -- # killprocess 3314749 00:27:38.391 17:37:46 
-- common/autotest_common.sh@926 -- # '[' -z 3314749 ']' 00:27:38.391 17:37:46 -- common/autotest_common.sh@930 -- # kill -0 3314749 00:27:38.391 17:37:46 -- common/autotest_common.sh@931 -- # uname 00:27:38.391 17:37:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:38.391 17:37:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3314749 00:27:38.391 17:37:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:38.391 17:37:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:38.391 17:37:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3314749' 00:27:38.391 killing process with pid 3314749 00:27:38.391 17:37:46 -- common/autotest_common.sh@945 -- # kill 3314749 00:27:38.391 17:37:46 -- common/autotest_common.sh@950 -- # wait 3314749 00:27:38.652 17:37:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:38.652 17:37:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:38.652 17:37:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:38.652 17:37:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:38.652 17:37:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:38.652 17:37:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.652 17:37:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.652 17:37:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.571 17:37:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:40.571 17:37:48 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:27:40.571 00:27:40.571 real 0m52.897s 00:27:40.571 user 2m48.861s 00:27:40.571 sys 0m11.721s 00:27:40.571 17:37:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.571 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:40.571 ************************************ 00:27:40.571 END TEST nvmf_perf_adq 00:27:40.571 ************************************ 00:27:40.571 
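The perf_adq test that just completed drove its ADQ setup through a series of `ethtool`, `sysctl`, and `tc` invocations visible in the xtrace above. As a reading aid, the sketch below collects those steps into one dry-run script; `IFACE`, the `10.0.0.2/32` address, and port `4420` are taken from the log, while the `run` helper is an assumption of this sketch (it only echoes each command, so the script runs without root or the `cvl_0_0` NIC present).

```shell
#!/bin/sh
# Dry-run sketch of the ADQ configuration steps from the log above.
# "run" prints each command instead of executing it (assumption of this
# sketch), so no privileges or ice-driver hardware are required.
IFACE=cvl_0_0
run() { echo "+ $*"; }

# Enable hardware TC offload; disable the packet-inspect optimization flag
run ethtool --offload "$IFACE" hw-tc-offload on
run ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# Busy-polling sysctls used by the test
run sysctl -w net.core.busy_poll=1
run sysctl -w net.core.busy_read=1

# Two traffic classes in channel mode: TC0 = queues 0-1, TC1 = queues 2-3
run tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
run tc qdisc add dev "$IFACE" ingress

# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into hardware TC1
run tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

In the actual test these commands are executed inside the `cvl_0_0_ns_spdk` network namespace (via `ip netns exec`), as shown in the log.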
17:37:49 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:40.571 17:37:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:40.571 17:37:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:40.571 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:27:40.571 ************************************ 00:27:40.571 START TEST nvmf_shutdown 00:27:40.571 ************************************ 00:27:40.571 17:37:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:40.832 * Looking for test storage... 00:27:40.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:40.832 17:37:49 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.832 17:37:49 -- nvmf/common.sh@7 -- # uname -s 00:27:40.832 17:37:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.832 17:37:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.832 17:37:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.832 17:37:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.832 17:37:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.832 17:37:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.832 17:37:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.832 17:37:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.832 17:37:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.832 17:37:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.832 17:37:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:40.832 17:37:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:40.832 17:37:49 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.832 17:37:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.832 17:37:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.832 17:37:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.832 17:37:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.832 17:37:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.832 17:37:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.832 17:37:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.832 17:37:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.832 17:37:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.832 17:37:49 -- paths/export.sh@5 -- # export PATH 00:27:40.832 17:37:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.832 17:37:49 -- nvmf/common.sh@46 -- # : 0 00:27:40.832 17:37:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:40.832 17:37:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:40.832 17:37:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:40.832 17:37:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.832 17:37:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.832 17:37:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:40.832 17:37:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:40.832 17:37:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:40.832 17:37:49 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:40.832 17:37:49 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:40.832 17:37:49 -- target/shutdown.sh@146 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:40.832 17:37:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:40.832 17:37:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:40.832 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:27:40.832 ************************************ 00:27:40.832 START TEST nvmf_shutdown_tc1 00:27:40.832 ************************************ 00:27:40.832 17:37:49 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:27:40.832 17:37:49 -- target/shutdown.sh@74 -- # starttarget 00:27:40.832 17:37:49 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:40.832 17:37:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:40.832 17:37:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.832 17:37:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:40.832 17:37:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:40.832 17:37:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:40.832 17:37:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.832 17:37:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.832 17:37:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.832 17:37:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:40.832 17:37:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:40.832 17:37:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:40.832 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:27:48.978 17:37:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:48.978 17:37:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:48.978 17:37:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:48.978 17:37:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:48.978 17:37:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:48.978 17:37:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:48.978 17:37:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 
00:27:48.978 17:37:56 -- nvmf/common.sh@294 -- # net_devs=() 00:27:48.978 17:37:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:48.978 17:37:56 -- nvmf/common.sh@295 -- # e810=() 00:27:48.978 17:37:56 -- nvmf/common.sh@295 -- # local -ga e810 00:27:48.978 17:37:56 -- nvmf/common.sh@296 -- # x722=() 00:27:48.978 17:37:56 -- nvmf/common.sh@296 -- # local -ga x722 00:27:48.978 17:37:56 -- nvmf/common.sh@297 -- # mlx=() 00:27:48.978 17:37:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:48.978 17:37:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.978 17:37:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:48.978 17:37:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:48.978 17:37:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:48.978 17:37:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:48.978 17:37:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:48.978 17:37:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:48.978 17:37:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:27:48.978 17:37:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:48.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:48.978 17:37:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:48.978 17:37:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:48.979 17:37:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:48.979 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:48.979 17:37:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:48.979 17:37:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:48.979 17:37:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.979 17:37:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:48.979 17:37:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.979 17:37:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:48.979 Found net devices under 0000:31:00.0: cvl_0_0 00:27:48.979 17:37:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.979 17:37:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:48.979 17:37:56 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.979 17:37:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:48.979 17:37:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.979 17:37:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:48.979 Found net devices under 0000:31:00.1: cvl_0_1 00:27:48.979 17:37:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.979 17:37:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:48.979 17:37:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:48.979 17:37:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:48.979 17:37:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.979 17:37:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.979 17:37:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.979 17:37:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:48.979 17:37:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.979 17:37:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.979 17:37:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:48.979 17:37:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.979 17:37:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.979 17:37:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:48.979 17:37:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:48.979 17:37:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.979 17:37:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.979 17:37:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.979 17:37:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:27:48.979 17:37:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:48.979 17:37:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.979 17:37:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.979 17:37:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.979 17:37:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:48.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:27:48.979 00:27:48.979 --- 10.0.0.2 ping statistics --- 00:27:48.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.979 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:27:48.979 17:37:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:48.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:27:48.979 00:27:48.979 --- 10.0.0.1 ping statistics --- 00:27:48.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.979 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:27:48.979 17:37:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.979 17:37:56 -- nvmf/common.sh@410 -- # return 0 00:27:48.979 17:37:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:48.979 17:37:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.979 17:37:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:48.979 17:37:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.979 17:37:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:48.979 17:37:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:48.979 17:37:56 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:48.979 17:37:56 -- nvmf/common.sh@467 -- # 
timing_enter start_nvmf_tgt 00:27:48.979 17:37:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:48.979 17:37:56 -- common/autotest_common.sh@10 -- # set +x 00:27:48.979 17:37:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:48.979 17:37:56 -- nvmf/common.sh@469 -- # nvmfpid=3321320 00:27:48.979 17:37:56 -- nvmf/common.sh@470 -- # waitforlisten 3321320 00:27:48.979 17:37:56 -- common/autotest_common.sh@819 -- # '[' -z 3321320 ']' 00:27:48.979 17:37:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.979 17:37:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:48.979 17:37:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.979 17:37:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:48.979 17:37:56 -- common/autotest_common.sh@10 -- # set +x 00:27:48.979 [2024-10-13 17:37:56.700509] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:48.979 [2024-10-13 17:37:56.700565] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.979 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.979 [2024-10-13 17:37:56.769565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:48.979 [2024-10-13 17:37:56.800385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:48.979 [2024-10-13 17:37:56.800515] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:48.979 [2024-10-13 17:37:56.800525] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:48.979 [2024-10-13 17:37:56.800533] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:48.979 [2024-10-13 17:37:56.800644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.979 [2024-10-13 17:37:56.800766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:48.979 [2024-10-13 17:37:56.801039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.979 [2024-10-13 17:37:56.801039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:49.240 17:37:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:49.240 17:37:57 -- common/autotest_common.sh@852 -- # return 0 00:27:49.240 17:37:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:49.240 17:37:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:49.240 17:37:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.240 17:37:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.240 17:37:57 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:49.240 17:37:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.240 17:37:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.240 [2024-10-13 17:37:57.581593] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.240 17:37:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.240 17:37:57 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:49.240 17:37:57 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:49.240 17:37:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:49.240 17:37:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.240 17:37:57 -- target/shutdown.sh@26 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:49.240 17:37:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.240 17:37:57 -- target/shutdown.sh@28 -- # cat 00:27:49.240 17:37:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.240 17:37:57 -- target/shutdown.sh@28 -- # cat 00:27:49.240 17:37:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.240 17:37:57 -- target/shutdown.sh@28 -- # cat 00:27:49.240 17:37:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.240 17:37:57 -- target/shutdown.sh@28 -- # cat 00:27:49.240 17:37:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.240 17:37:57 -- target/shutdown.sh@28 -- # cat 00:27:49.240 17:37:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.240 17:37:57 -- target/shutdown.sh@28 -- # cat 00:27:49.240 17:37:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.240 17:37:57 -- target/shutdown.sh@28 -- # cat 00:27:49.240 17:37:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.240 17:37:57 -- target/shutdown.sh@28 -- # cat 00:27:49.240 17:37:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.240 17:37:57 -- target/shutdown.sh@28 -- # cat 00:27:49.240 17:37:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.240 17:37:57 -- target/shutdown.sh@28 -- # cat 00:27:49.240 17:37:57 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:49.240 17:37:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.240 17:37:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.240 Malloc1 00:27:49.240 [2024-10-13 17:37:57.685094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.240 Malloc2 00:27:49.240 Malloc3 00:27:49.501 Malloc4 00:27:49.501 Malloc5 00:27:49.501 Malloc6 00:27:49.501 Malloc7 00:27:49.501 Malloc8 00:27:49.501 
Malloc9 00:27:49.501 Malloc10 00:27:49.762 17:37:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.762 17:37:58 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:49.762 17:37:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:49.762 17:37:58 -- common/autotest_common.sh@10 -- # set +x 00:27:49.762 17:37:58 -- target/shutdown.sh@78 -- # perfpid=3321631 00:27:49.762 17:37:58 -- target/shutdown.sh@79 -- # waitforlisten 3321631 /var/tmp/bdevperf.sock 00:27:49.762 17:37:58 -- common/autotest_common.sh@819 -- # '[' -z 3321631 ']' 00:27:49.762 17:37:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:49.762 17:37:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:49.762 17:37:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:49.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:49.762 17:37:58 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:49.762 17:37:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:49.762 17:37:58 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:49.762 17:37:58 -- common/autotest_common.sh@10 -- # set +x 00:27:49.762 17:37:58 -- nvmf/common.sh@520 -- # config=() 00:27:49.762 17:37:58 -- nvmf/common.sh@520 -- # local subsystem config 00:27:49.762 17:37:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:49.762 { 00:27:49.762 "params": { 00:27:49.762 "name": "Nvme$subsystem", 00:27:49.762 "trtype": "$TEST_TRANSPORT", 00:27:49.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.762 "adrfam": "ipv4", 00:27:49.762 "trsvcid": "$NVMF_PORT", 00:27:49.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.762 "hdgst": ${hdgst:-false}, 00:27:49.762 "ddgst": ${ddgst:-false} 00:27:49.762 }, 00:27:49.762 "method": "bdev_nvme_attach_controller" 00:27:49.762 } 00:27:49.762 EOF 00:27:49.762 )") 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # cat 00:27:49.762 17:37:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:49.762 { 00:27:49.762 "params": { 00:27:49.762 "name": "Nvme$subsystem", 00:27:49.762 "trtype": "$TEST_TRANSPORT", 00:27:49.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.762 "adrfam": "ipv4", 00:27:49.762 "trsvcid": "$NVMF_PORT", 00:27:49.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.762 "hdgst": ${hdgst:-false}, 00:27:49.762 "ddgst": ${ddgst:-false} 00:27:49.762 }, 00:27:49.762 "method": "bdev_nvme_attach_controller" 00:27:49.762 } 00:27:49.762 EOF 
00:27:49.762 )") 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # cat 00:27:49.762 17:37:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:49.762 { 00:27:49.762 "params": { 00:27:49.762 "name": "Nvme$subsystem", 00:27:49.762 "trtype": "$TEST_TRANSPORT", 00:27:49.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.762 "adrfam": "ipv4", 00:27:49.762 "trsvcid": "$NVMF_PORT", 00:27:49.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.762 "hdgst": ${hdgst:-false}, 00:27:49.762 "ddgst": ${ddgst:-false} 00:27:49.762 }, 00:27:49.762 "method": "bdev_nvme_attach_controller" 00:27:49.762 } 00:27:49.762 EOF 00:27:49.762 )") 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # cat 00:27:49.762 17:37:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:49.762 { 00:27:49.762 "params": { 00:27:49.762 "name": "Nvme$subsystem", 00:27:49.762 "trtype": "$TEST_TRANSPORT", 00:27:49.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.762 "adrfam": "ipv4", 00:27:49.762 "trsvcid": "$NVMF_PORT", 00:27:49.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.762 "hdgst": ${hdgst:-false}, 00:27:49.762 "ddgst": ${ddgst:-false} 00:27:49.762 }, 00:27:49.762 "method": "bdev_nvme_attach_controller" 00:27:49.762 } 00:27:49.762 EOF 00:27:49.762 )") 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # cat 00:27:49.762 17:37:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:49.762 { 00:27:49.762 "params": { 00:27:49.762 "name": "Nvme$subsystem", 00:27:49.762 "trtype": "$TEST_TRANSPORT", 00:27:49.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.762 "adrfam": "ipv4", 00:27:49.762 "trsvcid": "$NVMF_PORT", 00:27:49.762 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.762 "hdgst": ${hdgst:-false}, 00:27:49.762 "ddgst": ${ddgst:-false} 00:27:49.762 }, 00:27:49.762 "method": "bdev_nvme_attach_controller" 00:27:49.762 } 00:27:49.762 EOF 00:27:49.762 )") 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # cat 00:27:49.762 17:37:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:49.762 17:37:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:49.762 { 00:27:49.762 "params": { 00:27:49.762 "name": "Nvme$subsystem", 00:27:49.762 "trtype": "$TEST_TRANSPORT", 00:27:49.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "$NVMF_PORT", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.763 "hdgst": ${hdgst:-false}, 00:27:49.763 "ddgst": ${ddgst:-false} 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 } 00:27:49.763 EOF 00:27:49.763 )") 00:27:49.763 17:37:58 -- nvmf/common.sh@542 -- # cat 00:27:49.763 17:37:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:49.763 17:37:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:49.763 { 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme$subsystem", 00:27:49.763 "trtype": "$TEST_TRANSPORT", 00:27:49.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "$NVMF_PORT", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.763 "hdgst": ${hdgst:-false}, 00:27:49.763 "ddgst": ${ddgst:-false} 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 } 00:27:49.763 EOF 00:27:49.763 )") 00:27:49.763 [2024-10-13 17:37:58.141008] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:49.763 [2024-10-13 17:37:58.141104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:49.763 17:37:58 -- nvmf/common.sh@542 -- # cat 00:27:49.763 17:37:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:49.763 17:37:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:49.763 { 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme$subsystem", 00:27:49.763 "trtype": "$TEST_TRANSPORT", 00:27:49.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "$NVMF_PORT", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.763 "hdgst": ${hdgst:-false}, 00:27:49.763 "ddgst": ${ddgst:-false} 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 } 00:27:49.763 EOF 00:27:49.763 )") 00:27:49.763 17:37:58 -- nvmf/common.sh@542 -- # cat 00:27:49.763 17:37:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:49.763 17:37:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:49.763 { 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme$subsystem", 00:27:49.763 "trtype": "$TEST_TRANSPORT", 00:27:49.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "$NVMF_PORT", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.763 "hdgst": ${hdgst:-false}, 00:27:49.763 "ddgst": ${ddgst:-false} 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 } 00:27:49.763 EOF 00:27:49.763 )") 00:27:49.763 17:37:58 -- nvmf/common.sh@542 -- # cat 00:27:49.763 17:37:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:49.763 17:37:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:49.763 { 
00:27:49.763 "params": { 00:27:49.763 "name": "Nvme$subsystem", 00:27:49.763 "trtype": "$TEST_TRANSPORT", 00:27:49.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "$NVMF_PORT", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.763 "hdgst": ${hdgst:-false}, 00:27:49.763 "ddgst": ${ddgst:-false} 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 } 00:27:49.763 EOF 00:27:49.763 )") 00:27:49.763 17:37:58 -- nvmf/common.sh@542 -- # cat 00:27:49.763 17:37:58 -- nvmf/common.sh@544 -- # jq . 00:27:49.763 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.763 17:37:58 -- nvmf/common.sh@545 -- # IFS=, 00:27:49.763 17:37:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme1", 00:27:49.763 "trtype": "tcp", 00:27:49.763 "traddr": "10.0.0.2", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "4420", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:49.763 "hdgst": false, 00:27:49.763 "ddgst": false 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 },{ 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme2", 00:27:49.763 "trtype": "tcp", 00:27:49.763 "traddr": "10.0.0.2", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "4420", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:49.763 "hdgst": false, 00:27:49.763 "ddgst": false 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 },{ 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme3", 00:27:49.763 "trtype": "tcp", 00:27:49.763 "traddr": "10.0.0.2", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "4420", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:49.763 
"hdgst": false, 00:27:49.763 "ddgst": false 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 },{ 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme4", 00:27:49.763 "trtype": "tcp", 00:27:49.763 "traddr": "10.0.0.2", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "4420", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:49.763 "hdgst": false, 00:27:49.763 "ddgst": false 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 },{ 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme5", 00:27:49.763 "trtype": "tcp", 00:27:49.763 "traddr": "10.0.0.2", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "4420", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:49.763 "hdgst": false, 00:27:49.763 "ddgst": false 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 },{ 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme6", 00:27:49.763 "trtype": "tcp", 00:27:49.763 "traddr": "10.0.0.2", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "4420", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:49.763 "hdgst": false, 00:27:49.763 "ddgst": false 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 },{ 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme7", 00:27:49.763 "trtype": "tcp", 00:27:49.763 "traddr": "10.0.0.2", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "4420", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:49.763 "hdgst": false, 00:27:49.763 "ddgst": false 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 },{ 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme8", 00:27:49.763 "trtype": "tcp", 00:27:49.763 "traddr": "10.0.0.2", 00:27:49.763 
"adrfam": "ipv4", 00:27:49.763 "trsvcid": "4420", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:49.763 "hdgst": false, 00:27:49.763 "ddgst": false 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 },{ 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme9", 00:27:49.763 "trtype": "tcp", 00:27:49.763 "traddr": "10.0.0.2", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "4420", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:49.763 "hdgst": false, 00:27:49.763 "ddgst": false 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 },{ 00:27:49.763 "params": { 00:27:49.763 "name": "Nvme10", 00:27:49.763 "trtype": "tcp", 00:27:49.763 "traddr": "10.0.0.2", 00:27:49.763 "adrfam": "ipv4", 00:27:49.763 "trsvcid": "4420", 00:27:49.763 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:49.763 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:49.763 "hdgst": false, 00:27:49.763 "ddgst": false 00:27:49.763 }, 00:27:49.763 "method": "bdev_nvme_attach_controller" 00:27:49.763 }' 00:27:49.764 [2024-10-13 17:37:58.208200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.764 [2024-10-13 17:37:58.237259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.309 17:38:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:52.309 17:38:00 -- common/autotest_common.sh@852 -- # return 0 00:27:52.309 17:38:00 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:52.309 17:38:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.309 17:38:00 -- common/autotest_common.sh@10 -- # set +x 00:27:52.309 17:38:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.309 17:38:00 -- target/shutdown.sh@83 -- # kill -9 3321631 00:27:52.309 17:38:00 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:52.309 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3321631 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:52.309 17:38:00 -- target/shutdown.sh@87 -- # sleep 1 00:27:52.881 17:38:01 -- target/shutdown.sh@88 -- # kill -0 3321320 00:27:52.881 17:38:01 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:52.881 17:38:01 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:52.881 17:38:01 -- nvmf/common.sh@520 -- # config=() 00:27:52.881 17:38:01 -- nvmf/common.sh@520 -- # local subsystem config 00:27:52.881 17:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:52.881 17:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:52.881 { 00:27:52.881 "params": { 00:27:52.881 "name": "Nvme$subsystem", 00:27:52.881 "trtype": "$TEST_TRANSPORT", 00:27:52.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.881 "adrfam": "ipv4", 00:27:52.881 "trsvcid": "$NVMF_PORT", 00:27:52.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.881 "hdgst": ${hdgst:-false}, 00:27:52.881 "ddgst": ${ddgst:-false} 00:27:52.881 }, 00:27:52.881 "method": "bdev_nvme_attach_controller" 00:27:52.881 } 00:27:52.881 EOF 00:27:52.881 )") 00:27:52.881 17:38:01 -- nvmf/common.sh@542 -- # cat 00:27:52.881 17:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:52.881 17:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:52.881 { 00:27:52.881 "params": { 00:27:52.881 "name": "Nvme$subsystem", 00:27:52.881 "trtype": "$TEST_TRANSPORT", 00:27:52.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.881 "adrfam": "ipv4", 00:27:52.881 "trsvcid": "$NVMF_PORT", 00:27:52.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:27:52.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.881 "hdgst": ${hdgst:-false}, 00:27:52.881 "ddgst": ${ddgst:-false} 00:27:52.881 }, 00:27:52.881 "method": "bdev_nvme_attach_controller" 00:27:52.881 } 00:27:52.881 EOF 00:27:52.881 )") 00:27:52.881 17:38:01 -- nvmf/common.sh@542 -- # cat 00:27:52.881 17:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:52.881 17:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:52.881 { 00:27:52.881 "params": { 00:27:52.881 "name": "Nvme$subsystem", 00:27:52.881 "trtype": "$TEST_TRANSPORT", 00:27:52.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.881 "adrfam": "ipv4", 00:27:52.881 "trsvcid": "$NVMF_PORT", 00:27:52.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.881 "hdgst": ${hdgst:-false}, 00:27:52.881 "ddgst": ${ddgst:-false} 00:27:52.881 }, 00:27:52.881 "method": "bdev_nvme_attach_controller" 00:27:52.881 } 00:27:52.881 EOF 00:27:52.881 )") 00:27:52.881 17:38:01 -- nvmf/common.sh@542 -- # cat 00:27:52.881 17:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:52.881 17:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:52.881 { 00:27:52.881 "params": { 00:27:52.881 "name": "Nvme$subsystem", 00:27:52.881 "trtype": "$TEST_TRANSPORT", 00:27:52.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.881 "adrfam": "ipv4", 00:27:52.881 "trsvcid": "$NVMF_PORT", 00:27:52.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.881 "hdgst": ${hdgst:-false}, 00:27:52.881 "ddgst": ${ddgst:-false} 00:27:52.881 }, 00:27:52.881 "method": "bdev_nvme_attach_controller" 00:27:52.881 } 00:27:52.881 EOF 00:27:52.881 )") 00:27:52.881 17:38:01 -- nvmf/common.sh@542 -- # cat 00:27:52.881 17:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:52.881 17:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:52.881 { 
00:27:52.881 "params": { 00:27:52.881 "name": "Nvme$subsystem", 00:27:52.881 "trtype": "$TEST_TRANSPORT", 00:27:52.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.881 "adrfam": "ipv4", 00:27:52.881 "trsvcid": "$NVMF_PORT", 00:27:52.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.881 "hdgst": ${hdgst:-false}, 00:27:52.881 "ddgst": ${ddgst:-false} 00:27:52.881 }, 00:27:52.881 "method": "bdev_nvme_attach_controller" 00:27:52.881 } 00:27:52.881 EOF 00:27:52.881 )") 00:27:52.881 17:38:01 -- nvmf/common.sh@542 -- # cat 00:27:52.881 17:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:52.882 17:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:52.882 { 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme$subsystem", 00:27:52.882 "trtype": "$TEST_TRANSPORT", 00:27:52.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "$NVMF_PORT", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.882 "hdgst": ${hdgst:-false}, 00:27:52.882 "ddgst": ${ddgst:-false} 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 } 00:27:52.882 EOF 00:27:52.882 )") 00:27:52.882 17:38:01 -- nvmf/common.sh@542 -- # cat 00:27:52.882 [2024-10-13 17:38:01.294846] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:52.882 [2024-10-13 17:38:01.294898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322155 ] 00:27:52.882 17:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:52.882 17:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:52.882 { 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme$subsystem", 00:27:52.882 "trtype": "$TEST_TRANSPORT", 00:27:52.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "$NVMF_PORT", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.882 "hdgst": ${hdgst:-false}, 00:27:52.882 "ddgst": ${ddgst:-false} 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 } 00:27:52.882 EOF 00:27:52.882 )") 00:27:52.882 17:38:01 -- nvmf/common.sh@542 -- # cat 00:27:52.882 17:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:52.882 17:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:52.882 { 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme$subsystem", 00:27:52.882 "trtype": "$TEST_TRANSPORT", 00:27:52.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "$NVMF_PORT", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.882 "hdgst": ${hdgst:-false}, 00:27:52.882 "ddgst": ${ddgst:-false} 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 } 00:27:52.882 EOF 00:27:52.882 )") 00:27:52.882 17:38:01 -- nvmf/common.sh@542 -- # cat 00:27:52.882 17:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:52.882 17:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:52.882 { 00:27:52.882 "params": { 00:27:52.882 "name": 
"Nvme$subsystem", 00:27:52.882 "trtype": "$TEST_TRANSPORT", 00:27:52.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "$NVMF_PORT", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.882 "hdgst": ${hdgst:-false}, 00:27:52.882 "ddgst": ${ddgst:-false} 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 } 00:27:52.882 EOF 00:27:52.882 )") 00:27:52.882 17:38:01 -- nvmf/common.sh@542 -- # cat 00:27:52.882 17:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:52.882 17:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:52.882 { 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme$subsystem", 00:27:52.882 "trtype": "$TEST_TRANSPORT", 00:27:52.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "$NVMF_PORT", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.882 "hdgst": ${hdgst:-false}, 00:27:52.882 "ddgst": ${ddgst:-false} 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 } 00:27:52.882 EOF 00:27:52.882 )") 00:27:52.882 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.882 17:38:01 -- nvmf/common.sh@542 -- # cat 00:27:52.882 17:38:01 -- nvmf/common.sh@544 -- # jq . 
00:27:52.882 17:38:01 -- nvmf/common.sh@545 -- # IFS=, 00:27:52.882 17:38:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme1", 00:27:52.882 "trtype": "tcp", 00:27:52.882 "traddr": "10.0.0.2", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "4420", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:52.882 "hdgst": false, 00:27:52.882 "ddgst": false 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 },{ 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme2", 00:27:52.882 "trtype": "tcp", 00:27:52.882 "traddr": "10.0.0.2", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "4420", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:52.882 "hdgst": false, 00:27:52.882 "ddgst": false 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 },{ 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme3", 00:27:52.882 "trtype": "tcp", 00:27:52.882 "traddr": "10.0.0.2", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "4420", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:52.882 "hdgst": false, 00:27:52.882 "ddgst": false 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 },{ 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme4", 00:27:52.882 "trtype": "tcp", 00:27:52.882 "traddr": "10.0.0.2", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "4420", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:52.882 "hdgst": false, 00:27:52.882 "ddgst": false 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 },{ 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme5", 00:27:52.882 "trtype": "tcp", 00:27:52.882 "traddr": "10.0.0.2", 00:27:52.882 "adrfam": "ipv4", 
00:27:52.882 "trsvcid": "4420", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:52.882 "hdgst": false, 00:27:52.882 "ddgst": false 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 },{ 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme6", 00:27:52.882 "trtype": "tcp", 00:27:52.882 "traddr": "10.0.0.2", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "4420", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:52.882 "hdgst": false, 00:27:52.882 "ddgst": false 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 },{ 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme7", 00:27:52.882 "trtype": "tcp", 00:27:52.882 "traddr": "10.0.0.2", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "4420", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:52.882 "hdgst": false, 00:27:52.882 "ddgst": false 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 },{ 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme8", 00:27:52.882 "trtype": "tcp", 00:27:52.882 "traddr": "10.0.0.2", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "4420", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:52.882 "hdgst": false, 00:27:52.882 "ddgst": false 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 },{ 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme9", 00:27:52.882 "trtype": "tcp", 00:27:52.882 "traddr": "10.0.0.2", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "4420", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:52.882 "hdgst": false, 00:27:52.882 "ddgst": false 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 
00:27:52.882 },{ 00:27:52.882 "params": { 00:27:52.882 "name": "Nvme10", 00:27:52.882 "trtype": "tcp", 00:27:52.882 "traddr": "10.0.0.2", 00:27:52.882 "adrfam": "ipv4", 00:27:52.882 "trsvcid": "4420", 00:27:52.882 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:52.882 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:52.882 "hdgst": false, 00:27:52.882 "ddgst": false 00:27:52.882 }, 00:27:52.882 "method": "bdev_nvme_attach_controller" 00:27:52.882 }' 00:27:52.882 [2024-10-13 17:38:01.357325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.882 [2024-10-13 17:38:01.386154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.267 Running I/O for 1 seconds... 00:27:55.661 00:27:55.661 Latency(us) 00:27:55.661 [2024-10-13T15:38:04.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.661 [2024-10-13T15:38:04.185Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.661 Verification LBA range: start 0x0 length 0x400 00:27:55.661 Nvme1n1 : 1.07 333.28 20.83 0.00 0.00 185883.17 42379.95 151169.71 00:27:55.661 [2024-10-13T15:38:04.185Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.661 Verification LBA range: start 0x0 length 0x400 00:27:55.661 Nvme2n1 : 1.08 329.25 20.58 0.00 0.00 189707.49 18240.85 194860.37 00:27:55.661 [2024-10-13T15:38:04.185Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.661 Verification LBA range: start 0x0 length 0x400 00:27:55.661 Nvme3n1 : 1.07 332.45 20.78 0.00 0.00 183384.80 43472.21 147674.45 00:27:55.661 [2024-10-13T15:38:04.185Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.661 Verification LBA range: start 0x0 length 0x400 00:27:55.661 Nvme4n1 : 1.09 363.18 22.70 0.00 0.00 170212.25 9611.95 145053.01 00:27:55.661 [2024-10-13T15:38:04.185Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.661 
Verification LBA range: start 0x0 length 0x400 00:27:55.661 Nvme5n1 : 1.07 331.66 20.73 0.00 0.00 182493.15 30801.92 143305.39 00:27:55.661 [2024-10-13T15:38:04.185Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.661 Verification LBA range: start 0x0 length 0x400 00:27:55.661 Nvme6n1 : 1.12 324.92 20.31 0.00 0.00 179983.35 2949.12 174762.67 00:27:55.661 [2024-10-13T15:38:04.185Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.661 Verification LBA range: start 0x0 length 0x400 00:27:55.661 Nvme7n1 : 1.09 363.84 22.74 0.00 0.00 166197.43 10977.28 141557.76 00:27:55.661 [2024-10-13T15:38:04.185Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.661 Verification LBA range: start 0x0 length 0x400 00:27:55.661 Nvme8n1 : 1.08 329.96 20.62 0.00 0.00 180616.14 18459.31 141557.76 00:27:55.661 [2024-10-13T15:38:04.185Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.661 Verification LBA range: start 0x0 length 0x400 00:27:55.661 Nvme9n1 : 1.08 327.22 20.45 0.00 0.00 181514.34 12288.00 153791.15 00:27:55.661 [2024-10-13T15:38:04.185Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.661 Verification LBA range: start 0x0 length 0x400 00:27:55.661 Nvme10n1 : 1.09 330.35 20.65 0.00 0.00 178513.80 11468.80 153791.15 00:27:55.661 [2024-10-13T15:38:04.185Z] =================================================================================================================== 00:27:55.661 [2024-10-13T15:38:04.185Z] Total : 3366.11 210.38 0.00 0.00 179580.23 2949.12 194860.37 00:27:55.661 17:38:03 -- target/shutdown.sh@93 -- # stoptarget 00:27:55.661 17:38:03 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:55.661 17:38:03 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:55.661 17:38:03 -- target/shutdown.sh@43 -- # rm 
-rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:55.661 17:38:03 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:55.661 17:38:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:55.661 17:38:03 -- nvmf/common.sh@116 -- # sync 00:27:55.661 17:38:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:55.661 17:38:03 -- nvmf/common.sh@119 -- # set +e 00:27:55.661 17:38:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:55.661 17:38:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:55.661 rmmod nvme_tcp 00:27:55.661 rmmod nvme_fabrics 00:27:55.661 rmmod nvme_keyring 00:27:55.661 17:38:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:55.661 17:38:04 -- nvmf/common.sh@123 -- # set -e 00:27:55.661 17:38:04 -- nvmf/common.sh@124 -- # return 0 00:27:55.661 17:38:04 -- nvmf/common.sh@477 -- # '[' -n 3321320 ']' 00:27:55.661 17:38:04 -- nvmf/common.sh@478 -- # killprocess 3321320 00:27:55.661 17:38:04 -- common/autotest_common.sh@926 -- # '[' -z 3321320 ']' 00:27:55.661 17:38:04 -- common/autotest_common.sh@930 -- # kill -0 3321320 00:27:55.661 17:38:04 -- common/autotest_common.sh@931 -- # uname 00:27:55.661 17:38:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:55.661 17:38:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3321320 00:27:55.661 17:38:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:55.661 17:38:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:55.661 17:38:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3321320' 00:27:55.661 killing process with pid 3321320 00:27:55.661 17:38:04 -- common/autotest_common.sh@945 -- # kill 3321320 00:27:55.661 17:38:04 -- common/autotest_common.sh@950 -- # wait 3321320 00:27:55.923 17:38:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:55.923 17:38:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:55.923 17:38:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 
00:27:55.923 17:38:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:55.923 17:38:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:55.923 17:38:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.923 17:38:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.923 17:38:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.470 17:38:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:58.470 00:27:58.470 real 0m17.249s 00:27:58.470 user 0m37.036s 00:27:58.470 sys 0m6.758s 00:27:58.470 17:38:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:58.470 17:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.470 ************************************ 00:27:58.470 END TEST nvmf_shutdown_tc1 00:27:58.470 ************************************ 00:27:58.470 17:38:06 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:58.470 17:38:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:58.470 17:38:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:58.470 17:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.470 ************************************ 00:27:58.470 START TEST nvmf_shutdown_tc2 00:27:58.470 ************************************ 00:27:58.470 17:38:06 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:27:58.470 17:38:06 -- target/shutdown.sh@98 -- # starttarget 00:27:58.470 17:38:06 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:58.470 17:38:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:58.470 17:38:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.470 17:38:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:58.470 17:38:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:58.470 17:38:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:58.470 17:38:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.470 17:38:06 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.470 17:38:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.470 17:38:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:58.470 17:38:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:58.470 17:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.470 17:38:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:58.470 17:38:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:58.470 17:38:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:58.470 17:38:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:58.470 17:38:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:58.470 17:38:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:58.470 17:38:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:58.470 17:38:06 -- nvmf/common.sh@294 -- # net_devs=() 00:27:58.470 17:38:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:58.470 17:38:06 -- nvmf/common.sh@295 -- # e810=() 00:27:58.470 17:38:06 -- nvmf/common.sh@295 -- # local -ga e810 00:27:58.470 17:38:06 -- nvmf/common.sh@296 -- # x722=() 00:27:58.470 17:38:06 -- nvmf/common.sh@296 -- # local -ga x722 00:27:58.470 17:38:06 -- nvmf/common.sh@297 -- # mlx=() 00:27:58.470 17:38:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:58.470 17:38:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.470 17:38:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:58.470 17:38:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:58.470 17:38:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:58.470 17:38:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:58.470 17:38:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:58.470 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:58.470 17:38:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:58.470 17:38:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:58.470 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:58.470 17:38:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:27:58.470 17:38:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:58.470 17:38:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:58.470 17:38:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.470 17:38:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:58.470 17:38:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.470 17:38:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:58.470 Found net devices under 0000:31:00.0: cvl_0_0 00:27:58.470 17:38:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.470 17:38:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:58.470 17:38:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.470 17:38:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:58.470 17:38:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.470 17:38:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:58.470 Found net devices under 0000:31:00.1: cvl_0_1 00:27:58.470 17:38:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.470 17:38:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:58.470 17:38:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:58.470 17:38:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:58.470 17:38:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:58.470 17:38:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.470 17:38:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.470 17:38:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.470 17:38:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:58.470 17:38:06 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.470 17:38:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.470 17:38:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:58.470 17:38:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.470 17:38:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.470 17:38:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:58.470 17:38:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:58.470 17:38:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.470 17:38:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.470 17:38:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.470 17:38:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.471 17:38:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:58.471 17:38:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.471 17:38:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.471 17:38:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.471 17:38:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:58.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:27:58.471 00:27:58.471 --- 10.0.0.2 ping statistics --- 00:27:58.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.471 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:27:58.471 17:38:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:27:58.471 00:27:58.471 --- 10.0.0.1 ping statistics --- 00:27:58.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.471 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:27:58.471 17:38:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.471 17:38:06 -- nvmf/common.sh@410 -- # return 0 00:27:58.471 17:38:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:58.471 17:38:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.471 17:38:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:58.471 17:38:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:58.471 17:38:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.471 17:38:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:58.471 17:38:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:58.471 17:38:06 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:58.471 17:38:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:58.471 17:38:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:58.471 17:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.471 17:38:06 -- nvmf/common.sh@469 -- # nvmfpid=3323388 00:27:58.471 17:38:06 -- nvmf/common.sh@470 -- # waitforlisten 3323388 00:27:58.471 17:38:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:58.471 17:38:06 -- common/autotest_common.sh@819 -- # '[' -z 3323388 ']' 00:27:58.471 17:38:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.471 17:38:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:58.471 17:38:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:58.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.471 17:38:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:58.471 17:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.471 [2024-10-13 17:38:06.915256] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:58.471 [2024-10-13 17:38:06.915318] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.471 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.732 [2024-10-13 17:38:07.004424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:58.732 [2024-10-13 17:38:07.034613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:58.732 [2024-10-13 17:38:07.034718] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.732 [2024-10-13 17:38:07.034727] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.732 [2024-10-13 17:38:07.034733] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:58.732 [2024-10-13 17:38:07.034862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.732 [2024-10-13 17:38:07.035018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.732 [2024-10-13 17:38:07.035161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:58.732 [2024-10-13 17:38:07.035300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.303 17:38:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:59.303 17:38:07 -- common/autotest_common.sh@852 -- # return 0 00:27:59.303 17:38:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:59.303 17:38:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:59.303 17:38:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.303 17:38:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.303 17:38:07 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:59.303 17:38:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.303 17:38:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.303 [2024-10-13 17:38:07.759266] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.303 17:38:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.303 17:38:07 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:59.303 17:38:07 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:59.303 17:38:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:59.303 17:38:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.303 17:38:07 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:59.303 17:38:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.303 17:38:07 -- target/shutdown.sh@28 -- # cat 00:27:59.303 17:38:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:27:59.303 17:38:07 -- target/shutdown.sh@28 -- # cat 00:27:59.303 17:38:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.303 17:38:07 -- target/shutdown.sh@28 -- # cat 00:27:59.303 17:38:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.303 17:38:07 -- target/shutdown.sh@28 -- # cat 00:27:59.303 17:38:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.303 17:38:07 -- target/shutdown.sh@28 -- # cat 00:27:59.303 17:38:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.303 17:38:07 -- target/shutdown.sh@28 -- # cat 00:27:59.303 17:38:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.303 17:38:07 -- target/shutdown.sh@28 -- # cat 00:27:59.303 17:38:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.303 17:38:07 -- target/shutdown.sh@28 -- # cat 00:27:59.303 17:38:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.303 17:38:07 -- target/shutdown.sh@28 -- # cat 00:27:59.303 17:38:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.303 17:38:07 -- target/shutdown.sh@28 -- # cat 00:27:59.303 17:38:07 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:59.303 17:38:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.303 17:38:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.563 Malloc1 00:27:59.563 [2024-10-13 17:38:07.858065] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.563 Malloc2 00:27:59.563 Malloc3 00:27:59.563 Malloc4 00:27:59.563 Malloc5 00:27:59.563 Malloc6 00:27:59.563 Malloc7 00:27:59.824 Malloc8 00:27:59.824 Malloc9 00:27:59.824 Malloc10 00:27:59.824 17:38:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.824 17:38:08 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:59.824 17:38:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:59.824 17:38:08 -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.824 17:38:08 -- target/shutdown.sh@102 -- # perfpid=3323619 00:27:59.825 17:38:08 -- target/shutdown.sh@103 -- # waitforlisten 3323619 /var/tmp/bdevperf.sock 00:27:59.825 17:38:08 -- common/autotest_common.sh@819 -- # '[' -z 3323619 ']' 00:27:59.825 17:38:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.825 17:38:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:59.825 17:38:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:59.825 17:38:08 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:59.825 17:38:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:59.825 17:38:08 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:59.825 17:38:08 -- common/autotest_common.sh@10 -- # set +x 00:27:59.825 17:38:08 -- nvmf/common.sh@520 -- # config=() 00:27:59.825 17:38:08 -- nvmf/common.sh@520 -- # local subsystem config 00:27:59.825 17:38:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.825 { 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme$subsystem", 00:27:59.825 "trtype": "$TEST_TRANSPORT", 00:27:59.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "$NVMF_PORT", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.825 "hdgst": ${hdgst:-false}, 00:27:59.825 "ddgst": ${ddgst:-false} 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 } 00:27:59.825 EOF 00:27:59.825 
)") 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # cat 00:27:59.825 17:38:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.825 { 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme$subsystem", 00:27:59.825 "trtype": "$TEST_TRANSPORT", 00:27:59.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "$NVMF_PORT", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.825 "hdgst": ${hdgst:-false}, 00:27:59.825 "ddgst": ${ddgst:-false} 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 } 00:27:59.825 EOF 00:27:59.825 )") 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # cat 00:27:59.825 17:38:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.825 { 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme$subsystem", 00:27:59.825 "trtype": "$TEST_TRANSPORT", 00:27:59.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "$NVMF_PORT", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.825 "hdgst": ${hdgst:-false}, 00:27:59.825 "ddgst": ${ddgst:-false} 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 } 00:27:59.825 EOF 00:27:59.825 )") 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # cat 00:27:59.825 17:38:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.825 { 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme$subsystem", 00:27:59.825 "trtype": "$TEST_TRANSPORT", 00:27:59.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "$NVMF_PORT", 00:27:59.825 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.825 "hdgst": ${hdgst:-false}, 00:27:59.825 "ddgst": ${ddgst:-false} 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 } 00:27:59.825 EOF 00:27:59.825 )") 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # cat 00:27:59.825 17:38:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.825 { 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme$subsystem", 00:27:59.825 "trtype": "$TEST_TRANSPORT", 00:27:59.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "$NVMF_PORT", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.825 "hdgst": ${hdgst:-false}, 00:27:59.825 "ddgst": ${ddgst:-false} 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 } 00:27:59.825 EOF 00:27:59.825 )") 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # cat 00:27:59.825 17:38:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.825 { 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme$subsystem", 00:27:59.825 "trtype": "$TEST_TRANSPORT", 00:27:59.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "$NVMF_PORT", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.825 "hdgst": ${hdgst:-false}, 00:27:59.825 "ddgst": ${ddgst:-false} 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 } 00:27:59.825 EOF 00:27:59.825 )") 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # cat 00:27:59.825 [2024-10-13 17:38:08.310016] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:59.825 [2024-10-13 17:38:08.310094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323619 ] 00:27:59.825 17:38:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.825 { 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme$subsystem", 00:27:59.825 "trtype": "$TEST_TRANSPORT", 00:27:59.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "$NVMF_PORT", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.825 "hdgst": ${hdgst:-false}, 00:27:59.825 "ddgst": ${ddgst:-false} 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 } 00:27:59.825 EOF 00:27:59.825 )") 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # cat 00:27:59.825 17:38:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.825 { 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme$subsystem", 00:27:59.825 "trtype": "$TEST_TRANSPORT", 00:27:59.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "$NVMF_PORT", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.825 "hdgst": ${hdgst:-false}, 00:27:59.825 "ddgst": ${ddgst:-false} 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 } 00:27:59.825 EOF 00:27:59.825 )") 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # cat 00:27:59.825 17:38:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.825 { 00:27:59.825 "params": { 00:27:59.825 "name": 
"Nvme$subsystem", 00:27:59.825 "trtype": "$TEST_TRANSPORT", 00:27:59.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "$NVMF_PORT", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.825 "hdgst": ${hdgst:-false}, 00:27:59.825 "ddgst": ${ddgst:-false} 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 } 00:27:59.825 EOF 00:27:59.825 )") 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # cat 00:27:59.825 17:38:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.825 { 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme$subsystem", 00:27:59.825 "trtype": "$TEST_TRANSPORT", 00:27:59.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "$NVMF_PORT", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.825 "hdgst": ${hdgst:-false}, 00:27:59.825 "ddgst": ${ddgst:-false} 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 } 00:27:59.825 EOF 00:27:59.825 )") 00:27:59.825 17:38:08 -- nvmf/common.sh@542 -- # cat 00:27:59.825 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.825 17:38:08 -- nvmf/common.sh@544 -- # jq . 
00:27:59.825 17:38:08 -- nvmf/common.sh@545 -- # IFS=, 00:27:59.825 17:38:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme1", 00:27:59.825 "trtype": "tcp", 00:27:59.825 "traddr": "10.0.0.2", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "4420", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:59.825 "hdgst": false, 00:27:59.825 "ddgst": false 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 },{ 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme2", 00:27:59.825 "trtype": "tcp", 00:27:59.825 "traddr": "10.0.0.2", 00:27:59.825 "adrfam": "ipv4", 00:27:59.825 "trsvcid": "4420", 00:27:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:59.825 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:59.825 "hdgst": false, 00:27:59.825 "ddgst": false 00:27:59.825 }, 00:27:59.825 "method": "bdev_nvme_attach_controller" 00:27:59.825 },{ 00:27:59.825 "params": { 00:27:59.825 "name": "Nvme3", 00:27:59.825 "trtype": "tcp", 00:27:59.825 "traddr": "10.0.0.2", 00:27:59.825 "adrfam": "ipv4", 00:27:59.826 "trsvcid": "4420", 00:27:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:59.826 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:59.826 "hdgst": false, 00:27:59.826 "ddgst": false 00:27:59.826 }, 00:27:59.826 "method": "bdev_nvme_attach_controller" 00:27:59.826 },{ 00:27:59.826 "params": { 00:27:59.826 "name": "Nvme4", 00:27:59.826 "trtype": "tcp", 00:27:59.826 "traddr": "10.0.0.2", 00:27:59.826 "adrfam": "ipv4", 00:27:59.826 "trsvcid": "4420", 00:27:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:59.826 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:59.826 "hdgst": false, 00:27:59.826 "ddgst": false 00:27:59.826 }, 00:27:59.826 "method": "bdev_nvme_attach_controller" 00:27:59.826 },{ 00:27:59.826 "params": { 00:27:59.826 "name": "Nvme5", 00:27:59.826 "trtype": "tcp", 00:27:59.826 "traddr": "10.0.0.2", 00:27:59.826 "adrfam": "ipv4", 
00:27:59.826 "trsvcid": "4420", 00:27:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:59.826 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:59.826 "hdgst": false, 00:27:59.826 "ddgst": false 00:27:59.826 }, 00:27:59.826 "method": "bdev_nvme_attach_controller" 00:27:59.826 },{ 00:27:59.826 "params": { 00:27:59.826 "name": "Nvme6", 00:27:59.826 "trtype": "tcp", 00:27:59.826 "traddr": "10.0.0.2", 00:27:59.826 "adrfam": "ipv4", 00:27:59.826 "trsvcid": "4420", 00:27:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:59.826 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:59.826 "hdgst": false, 00:27:59.826 "ddgst": false 00:27:59.826 }, 00:27:59.826 "method": "bdev_nvme_attach_controller" 00:27:59.826 },{ 00:27:59.826 "params": { 00:27:59.826 "name": "Nvme7", 00:27:59.826 "trtype": "tcp", 00:27:59.826 "traddr": "10.0.0.2", 00:27:59.826 "adrfam": "ipv4", 00:27:59.826 "trsvcid": "4420", 00:27:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:59.826 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:59.826 "hdgst": false, 00:27:59.826 "ddgst": false 00:27:59.826 }, 00:27:59.826 "method": "bdev_nvme_attach_controller" 00:27:59.826 },{ 00:27:59.826 "params": { 00:27:59.826 "name": "Nvme8", 00:27:59.826 "trtype": "tcp", 00:27:59.826 "traddr": "10.0.0.2", 00:27:59.826 "adrfam": "ipv4", 00:27:59.826 "trsvcid": "4420", 00:27:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:59.826 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:59.826 "hdgst": false, 00:27:59.826 "ddgst": false 00:27:59.826 }, 00:27:59.826 "method": "bdev_nvme_attach_controller" 00:27:59.826 },{ 00:27:59.826 "params": { 00:27:59.826 "name": "Nvme9", 00:27:59.826 "trtype": "tcp", 00:27:59.826 "traddr": "10.0.0.2", 00:27:59.826 "adrfam": "ipv4", 00:27:59.826 "trsvcid": "4420", 00:27:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:59.826 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:59.826 "hdgst": false, 00:27:59.826 "ddgst": false 00:27:59.826 }, 00:27:59.826 "method": "bdev_nvme_attach_controller" 
00:27:59.826 },{ 00:27:59.826 "params": { 00:27:59.826 "name": "Nvme10", 00:27:59.826 "trtype": "tcp", 00:27:59.826 "traddr": "10.0.0.2", 00:27:59.826 "adrfam": "ipv4", 00:27:59.826 "trsvcid": "4420", 00:27:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:59.826 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:59.826 "hdgst": false, 00:27:59.826 "ddgst": false 00:27:59.826 }, 00:27:59.826 "method": "bdev_nvme_attach_controller" 00:27:59.826 }' 00:28:00.088 [2024-10-13 17:38:08.373024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.088 [2024-10-13 17:38:08.402044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.472 Running I/O for 10 seconds... 00:28:02.045 17:38:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:02.045 17:38:10 -- common/autotest_common.sh@852 -- # return 0 00:28:02.045 17:38:10 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:02.045 17:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.045 17:38:10 -- common/autotest_common.sh@10 -- # set +x 00:28:02.045 17:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.045 17:38:10 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:02.045 17:38:10 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:02.045 17:38:10 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:02.045 17:38:10 -- target/shutdown.sh@57 -- # local ret=1 00:28:02.045 17:38:10 -- target/shutdown.sh@58 -- # local i 00:28:02.045 17:38:10 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:02.045 17:38:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:02.045 17:38:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:02.045 17:38:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:02.045 17:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.045 17:38:10 -- common/autotest_common.sh@10 -- 
# set +x 00:28:02.045 17:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.045 17:38:10 -- target/shutdown.sh@60 -- # read_io_count=254 00:28:02.045 17:38:10 -- target/shutdown.sh@63 -- # '[' 254 -ge 100 ']' 00:28:02.045 17:38:10 -- target/shutdown.sh@64 -- # ret=0 00:28:02.045 17:38:10 -- target/shutdown.sh@65 -- # break 00:28:02.045 17:38:10 -- target/shutdown.sh@69 -- # return 0 00:28:02.045 17:38:10 -- target/shutdown.sh@109 -- # killprocess 3323619 00:28:02.045 17:38:10 -- common/autotest_common.sh@926 -- # '[' -z 3323619 ']' 00:28:02.045 17:38:10 -- common/autotest_common.sh@930 -- # kill -0 3323619 00:28:02.045 17:38:10 -- common/autotest_common.sh@931 -- # uname 00:28:02.045 17:38:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:02.045 17:38:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3323619 00:28:02.045 17:38:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:02.045 17:38:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:02.045 17:38:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3323619' 00:28:02.045 killing process with pid 3323619 00:28:02.045 17:38:10 -- common/autotest_common.sh@945 -- # kill 3323619 00:28:02.045 17:38:10 -- common/autotest_common.sh@950 -- # wait 3323619 00:28:02.306 Received shutdown signal, test time was about 0.802123 seconds 00:28:02.306 00:28:02.306 Latency(us) 00:28:02.306 [2024-10-13T15:38:10.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.306 [2024-10-13T15:38:10.830Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.306 Verification LBA range: start 0x0 length 0x400 00:28:02.306 Nvme1n1 : 0.76 467.21 29.20 0.00 0.00 135018.54 7755.09 120586.24 00:28:02.306 [2024-10-13T15:38:10.830Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.306 Verification LBA range: start 0x0 length 0x400 00:28:02.306 
Nvme2n1 : 0.75 428.54 26.78 0.00 0.00 144245.40 9393.49 154664.96 00:28:02.306 [2024-10-13T15:38:10.830Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.306 Verification LBA range: start 0x0 length 0x400 00:28:02.306 Nvme3n1 : 0.78 404.23 25.26 0.00 0.00 144241.62 19005.44 125829.12 00:28:02.306 [2024-10-13T15:38:10.830Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.306 Verification LBA range: start 0x0 length 0x400 00:28:02.306 Nvme4n1 : 0.74 428.73 26.80 0.00 0.00 141491.99 4669.44 120586.24 00:28:02.306 [2024-10-13T15:38:10.830Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.306 Verification LBA range: start 0x0 length 0x400 00:28:02.306 Nvme5n1 : 0.80 443.09 27.69 0.00 0.00 129538.75 10704.21 107915.95 00:28:02.306 [2024-10-13T15:38:10.830Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.306 Verification LBA range: start 0x0 length 0x400 00:28:02.306 Nvme6n1 : 0.79 399.11 24.94 0.00 0.00 141311.91 17257.81 116217.17 00:28:02.306 [2024-10-13T15:38:10.830Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.306 Verification LBA range: start 0x0 length 0x400 00:28:02.306 Nvme7n1 : 0.76 465.79 29.11 0.00 0.00 126870.23 8901.97 112721.92 00:28:02.306 [2024-10-13T15:38:10.830Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.306 Verification LBA range: start 0x0 length 0x400 00:28:02.306 Nvme8n1 : 0.74 424.08 26.51 0.00 0.00 136668.57 16274.77 110974.29 00:28:02.306 [2024-10-13T15:38:10.830Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.306 Verification LBA range: start 0x0 length 0x400 00:28:02.306 Nvme9n1 : 0.75 422.11 26.38 0.00 0.00 135570.40 17803.95 110537.39 00:28:02.306 [2024-10-13T15:38:10.830Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.306 Verification LBA range: start 
0x0 length 0x400 00:28:02.306 Nvme10n1 : 0.76 417.93 26.12 0.00 0.00 135370.90 11578.03 114469.55 00:28:02.306 [2024-10-13T15:38:10.830Z] =================================================================================================================== 00:28:02.306 [2024-10-13T15:38:10.830Z] Total : 4300.83 268.80 0.00 0.00 136812.81 4669.44 154664.96 00:28:02.306 17:38:10 -- target/shutdown.sh@112 -- # sleep 1 00:28:03.691 17:38:11 -- target/shutdown.sh@113 -- # kill -0 3323388 00:28:03.691 17:38:11 -- target/shutdown.sh@115 -- # stoptarget 00:28:03.691 17:38:11 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:03.691 17:38:11 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:03.691 17:38:11 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:03.691 17:38:11 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:03.691 17:38:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:03.691 17:38:11 -- nvmf/common.sh@116 -- # sync 00:28:03.691 17:38:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:03.691 17:38:11 -- nvmf/common.sh@119 -- # set +e 00:28:03.691 17:38:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:03.691 17:38:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:03.691 rmmod nvme_tcp 00:28:03.691 rmmod nvme_fabrics 00:28:03.691 rmmod nvme_keyring 00:28:03.691 17:38:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:03.691 17:38:11 -- nvmf/common.sh@123 -- # set -e 00:28:03.691 17:38:11 -- nvmf/common.sh@124 -- # return 0 00:28:03.691 17:38:11 -- nvmf/common.sh@477 -- # '[' -n 3323388 ']' 00:28:03.691 17:38:11 -- nvmf/common.sh@478 -- # killprocess 3323388 00:28:03.691 17:38:11 -- common/autotest_common.sh@926 -- # '[' -z 3323388 ']' 00:28:03.691 17:38:11 -- common/autotest_common.sh@930 -- # kill -0 3323388 00:28:03.691 17:38:11 -- common/autotest_common.sh@931 -- # uname 
00:28:03.691 17:38:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:03.691 17:38:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3323388 00:28:03.691 17:38:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:03.691 17:38:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:03.691 17:38:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3323388' 00:28:03.691 killing process with pid 3323388 00:28:03.691 17:38:11 -- common/autotest_common.sh@945 -- # kill 3323388 00:28:03.691 17:38:11 -- common/autotest_common.sh@950 -- # wait 3323388 00:28:03.691 17:38:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:03.691 17:38:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:03.691 17:38:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:03.691 17:38:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.691 17:38:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:03.691 17:38:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.691 17:38:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.691 17:38:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.235 17:38:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:06.235 00:28:06.235 real 0m7.778s 00:28:06.235 user 0m23.383s 00:28:06.235 sys 0m1.294s 00:28:06.235 17:38:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:06.235 17:38:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.235 ************************************ 00:28:06.235 END TEST nvmf_shutdown_tc2 00:28:06.235 ************************************ 00:28:06.235 17:38:14 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:06.235 17:38:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:06.235 17:38:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:06.235 17:38:14 -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.235 ************************************ 00:28:06.235 START TEST nvmf_shutdown_tc3 00:28:06.235 ************************************ 00:28:06.235 17:38:14 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:28:06.235 17:38:14 -- target/shutdown.sh@120 -- # starttarget 00:28:06.235 17:38:14 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:06.235 17:38:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:06.235 17:38:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.235 17:38:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:06.235 17:38:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:06.235 17:38:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:06.235 17:38:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.235 17:38:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.235 17:38:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.235 17:38:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:06.235 17:38:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:06.235 17:38:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:06.235 17:38:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.235 17:38:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:06.235 17:38:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:06.235 17:38:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:06.235 17:38:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:06.235 17:38:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:06.235 17:38:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:06.235 17:38:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:06.235 17:38:14 -- nvmf/common.sh@294 -- # net_devs=() 00:28:06.235 17:38:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:06.235 17:38:14 -- nvmf/common.sh@295 -- # e810=() 00:28:06.235 17:38:14 -- nvmf/common.sh@295 -- # 
local -ga e810 00:28:06.235 17:38:14 -- nvmf/common.sh@296 -- # x722=() 00:28:06.235 17:38:14 -- nvmf/common.sh@296 -- # local -ga x722 00:28:06.235 17:38:14 -- nvmf/common.sh@297 -- # mlx=() 00:28:06.235 17:38:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:06.235 17:38:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.235 17:38:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:06.235 17:38:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:06.235 17:38:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:06.235 17:38:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:06.235 17:38:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:06.235 17:38:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:06.235 17:38:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:06.235 17:38:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:06.236 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:06.236 17:38:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:06.236 17:38:14 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:06.236 17:38:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:06.236 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:06.236 17:38:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:06.236 17:38:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:06.236 17:38:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.236 17:38:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:06.236 17:38:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.236 17:38:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:06.236 Found net devices under 0000:31:00.0: cvl_0_0 00:28:06.236 17:38:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.236 17:38:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:06.236 17:38:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.236 17:38:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:06.236 17:38:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.236 17:38:14 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:06.236 Found net devices under 0000:31:00.1: cvl_0_1 00:28:06.236 17:38:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.236 17:38:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:06.236 17:38:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:06.236 17:38:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:06.236 17:38:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.236 17:38:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.236 17:38:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.236 17:38:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:06.236 17:38:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.236 17:38:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.236 17:38:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:06.236 17:38:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.236 17:38:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.236 17:38:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:06.236 17:38:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:06.236 17:38:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.236 17:38:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.236 17:38:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.236 17:38:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.236 17:38:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:06.236 17:38:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.236 17:38:14 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:28:06.236 17:38:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.236 17:38:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:06.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:28:06.236 00:28:06.236 --- 10.0.0.2 ping statistics --- 00:28:06.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.236 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:28:06.236 17:38:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:06.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:28:06.236 00:28:06.236 --- 10.0.0.1 ping statistics --- 00:28:06.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.236 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:28:06.236 17:38:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.236 17:38:14 -- nvmf/common.sh@410 -- # return 0 00:28:06.236 17:38:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:06.236 17:38:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.236 17:38:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:06.236 17:38:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.236 17:38:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:06.236 17:38:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:06.236 17:38:14 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:06.236 17:38:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:06.236 17:38:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:06.236 17:38:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.236 17:38:14 -- nvmf/common.sh@469 -- # nvmfpid=3325071 00:28:06.236 17:38:14 
-- nvmf/common.sh@470 -- # waitforlisten 3325071 00:28:06.236 17:38:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:06.236 17:38:14 -- common/autotest_common.sh@819 -- # '[' -z 3325071 ']' 00:28:06.236 17:38:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.236 17:38:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:06.236 17:38:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.236 17:38:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:06.236 17:38:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.236 [2024-10-13 17:38:14.746754] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:06.236 [2024-10-13 17:38:14.746818] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.497 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.497 [2024-10-13 17:38:14.835997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:06.497 [2024-10-13 17:38:14.868118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:06.497 [2024-10-13 17:38:14.868233] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.497 [2024-10-13 17:38:14.868241] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:06.497 [2024-10-13 17:38:14.868247] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.497 [2024-10-13 17:38:14.868354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.497 [2024-10-13 17:38:14.868510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.497 [2024-10-13 17:38:14.868638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.497 [2024-10-13 17:38:14.868640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:07.068 17:38:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:07.068 17:38:15 -- common/autotest_common.sh@852 -- # return 0 00:28:07.068 17:38:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:07.068 17:38:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:07.068 17:38:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.068 17:38:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.068 17:38:15 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.068 17:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.068 17:38:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.068 [2024-10-13 17:38:15.578230] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.068 17:38:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:07.068 17:38:15 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:07.068 17:38:15 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:07.068 17:38:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:07.068 17:38:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.068 17:38:15 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:07.327 17:38:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:28:07.327 17:38:15 -- target/shutdown.sh@28 -- # cat 00:28:07.327 17:38:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.327 17:38:15 -- target/shutdown.sh@28 -- # cat 00:28:07.327 17:38:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.327 17:38:15 -- target/shutdown.sh@28 -- # cat 00:28:07.327 17:38:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.327 17:38:15 -- target/shutdown.sh@28 -- # cat 00:28:07.327 17:38:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.327 17:38:15 -- target/shutdown.sh@28 -- # cat 00:28:07.327 17:38:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.327 17:38:15 -- target/shutdown.sh@28 -- # cat 00:28:07.327 17:38:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.327 17:38:15 -- target/shutdown.sh@28 -- # cat 00:28:07.327 17:38:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.327 17:38:15 -- target/shutdown.sh@28 -- # cat 00:28:07.328 17:38:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.328 17:38:15 -- target/shutdown.sh@28 -- # cat 00:28:07.328 17:38:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.328 17:38:15 -- target/shutdown.sh@28 -- # cat 00:28:07.328 17:38:15 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:07.328 17:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.328 17:38:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.328 Malloc1 00:28:07.328 [2024-10-13 17:38:15.677096] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.328 Malloc2 00:28:07.328 Malloc3 00:28:07.328 Malloc4 00:28:07.328 Malloc5 00:28:07.328 Malloc6 00:28:07.588 Malloc7 00:28:07.588 Malloc8 00:28:07.588 Malloc9 00:28:07.588 Malloc10 00:28:07.588 17:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:07.588 17:38:16 -- target/shutdown.sh@36 -- # 
timing_exit create_subsystems 00:28:07.588 17:38:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:07.588 17:38:16 -- common/autotest_common.sh@10 -- # set +x 00:28:07.588 17:38:16 -- target/shutdown.sh@124 -- # perfpid=3325429 00:28:07.588 17:38:16 -- target/shutdown.sh@125 -- # waitforlisten 3325429 /var/tmp/bdevperf.sock 00:28:07.588 17:38:16 -- common/autotest_common.sh@819 -- # '[' -z 3325429 ']' 00:28:07.588 17:38:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:07.588 17:38:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:07.588 17:38:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:07.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:07.588 17:38:16 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:07.588 17:38:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:07.588 17:38:16 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:07.588 17:38:16 -- common/autotest_common.sh@10 -- # set +x 00:28:07.588 17:38:16 -- nvmf/common.sh@520 -- # config=() 00:28:07.588 17:38:16 -- nvmf/common.sh@520 -- # local subsystem config 00:28:07.588 17:38:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:07.588 17:38:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:07.588 { 00:28:07.588 "params": { 00:28:07.588 "name": "Nvme$subsystem", 00:28:07.588 "trtype": "$TEST_TRANSPORT", 00:28:07.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.588 "adrfam": "ipv4", 00:28:07.588 "trsvcid": "$NVMF_PORT", 00:28:07.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.588 "hdgst": ${hdgst:-false}, 00:28:07.588 "ddgst": 
${ddgst:-false} 00:28:07.588 }, 00:28:07.588 "method": "bdev_nvme_attach_controller" 00:28:07.588 } 00:28:07.588 EOF 00:28:07.588 )") 00:28:07.588 17:38:16 -- nvmf/common.sh@542 -- # cat 00:28:07.588 [... the identical per-subsystem heredoc fragment and "nvmf/common.sh@542 -- # cat" pair repeats for each of the remaining subsystems, elided ...] 00:28:07.850 [2024-10-13 17:38:16.123479] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:07.850 [2024-10-13 17:38:16.123531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325429 ] 00:28:07.850 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.850 17:38:16 -- nvmf/common.sh@542 -- # cat 00:28:07.850 17:38:16 -- nvmf/common.sh@544 -- # jq . 
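For reference, the gen_nvmf_target_json pattern traced above can be reduced to a minimal standalone sketch: one heredoc JSON fragment per subsystem is appended to an array, then the fragments are joined with commas via `IFS=,` and `printf`. Values are hardcoded here for illustration; the real script substitutes `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT`, and emits more fields per controller.

```shell
# Minimal sketch of the gen_nvmf_target_json pattern seen in the trace:
# collect one JSON fragment per subsystem, then comma-join them.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas, as the IFS=, / printf '%s\n' step does.
IFS=,
joined="${config[*]}"
printf '%s\n' "$joined"
```

The `"${config[*]}"` expansion joins array elements with the first character of `IFS`, which is why the final output below shows the fragments separated by `},{`.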
00:28:07.850 17:38:16 -- nvmf/common.sh@545 -- # IFS=, 00:28:07.850 17:38:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:07.850 "params": { 00:28:07.850 "name": "Nvme1", 00:28:07.850 "trtype": "tcp", 00:28:07.850 "traddr": "10.0.0.2", 00:28:07.850 "adrfam": "ipv4", 00:28:07.850 "trsvcid": "4420", 00:28:07.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:07.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:07.850 "hdgst": false, 00:28:07.850 "ddgst": false 00:28:07.850 }, 00:28:07.850 "method": "bdev_nvme_attach_controller" 00:28:07.850 },{ 00:28:07.850 "params": { 00:28:07.850 "name": "Nvme2", 00:28:07.850 "trtype": "tcp", 00:28:07.850 "traddr": "10.0.0.2", 00:28:07.850 "adrfam": "ipv4", 00:28:07.850 "trsvcid": "4420", 00:28:07.850 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:07.850 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:07.850 "hdgst": false, 00:28:07.850 "ddgst": false 00:28:07.850 }, 00:28:07.850 "method": "bdev_nvme_attach_controller" 00:28:07.850 },{ 00:28:07.850 "params": { 00:28:07.850 "name": "Nvme3", 00:28:07.850 "trtype": "tcp", 00:28:07.850 "traddr": "10.0.0.2", 00:28:07.850 "adrfam": "ipv4", 00:28:07.850 "trsvcid": "4420", 00:28:07.850 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:07.850 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:07.850 "hdgst": false, 00:28:07.850 "ddgst": false 00:28:07.850 }, 00:28:07.850 "method": "bdev_nvme_attach_controller" 00:28:07.850 },{ 00:28:07.850 "params": { 00:28:07.850 "name": "Nvme4", 00:28:07.850 "trtype": "tcp", 00:28:07.850 "traddr": "10.0.0.2", 00:28:07.850 "adrfam": "ipv4", 00:28:07.850 "trsvcid": "4420", 00:28:07.850 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:07.850 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:07.850 "hdgst": false, 00:28:07.850 "ddgst": false 00:28:07.850 }, 00:28:07.850 "method": "bdev_nvme_attach_controller" 00:28:07.850 },{ 00:28:07.850 "params": { 00:28:07.850 "name": "Nvme5", 00:28:07.850 "trtype": "tcp", 00:28:07.850 "traddr": "10.0.0.2", 00:28:07.850 "adrfam": "ipv4", 
00:28:07.850 "trsvcid": "4420", 00:28:07.850 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:07.850 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:07.850 "hdgst": false, 00:28:07.850 "ddgst": false 00:28:07.850 }, 00:28:07.850 "method": "bdev_nvme_attach_controller" 00:28:07.850 },{ 00:28:07.850 "params": { 00:28:07.850 "name": "Nvme6", 00:28:07.850 "trtype": "tcp", 00:28:07.850 "traddr": "10.0.0.2", 00:28:07.850 "adrfam": "ipv4", 00:28:07.850 "trsvcid": "4420", 00:28:07.850 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:07.850 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:07.850 "hdgst": false, 00:28:07.850 "ddgst": false 00:28:07.850 }, 00:28:07.850 "method": "bdev_nvme_attach_controller" 00:28:07.850 },{ 00:28:07.850 "params": { 00:28:07.850 "name": "Nvme7", 00:28:07.850 "trtype": "tcp", 00:28:07.850 "traddr": "10.0.0.2", 00:28:07.850 "adrfam": "ipv4", 00:28:07.850 "trsvcid": "4420", 00:28:07.850 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:07.850 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:07.850 "hdgst": false, 00:28:07.850 "ddgst": false 00:28:07.850 }, 00:28:07.850 "method": "bdev_nvme_attach_controller" 00:28:07.850 },{ 00:28:07.850 "params": { 00:28:07.850 "name": "Nvme8", 00:28:07.850 "trtype": "tcp", 00:28:07.850 "traddr": "10.0.0.2", 00:28:07.850 "adrfam": "ipv4", 00:28:07.850 "trsvcid": "4420", 00:28:07.851 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:07.851 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:07.851 "hdgst": false, 00:28:07.851 "ddgst": false 00:28:07.851 }, 00:28:07.851 "method": "bdev_nvme_attach_controller" 00:28:07.851 },{ 00:28:07.851 "params": { 00:28:07.851 "name": "Nvme9", 00:28:07.851 "trtype": "tcp", 00:28:07.851 "traddr": "10.0.0.2", 00:28:07.851 "adrfam": "ipv4", 00:28:07.851 "trsvcid": "4420", 00:28:07.851 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:07.851 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:07.851 "hdgst": false, 00:28:07.851 "ddgst": false 00:28:07.851 }, 00:28:07.851 "method": "bdev_nvme_attach_controller" 
00:28:07.851 },{ 00:28:07.851 "params": { 00:28:07.851 "name": "Nvme10", 00:28:07.851 "trtype": "tcp", 00:28:07.851 "traddr": "10.0.0.2", 00:28:07.851 "adrfam": "ipv4", 00:28:07.851 "trsvcid": "4420", 00:28:07.851 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:07.851 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:07.851 "hdgst": false, 00:28:07.851 "ddgst": false 00:28:07.851 }, 00:28:07.851 "method": "bdev_nvme_attach_controller" 00:28:07.851 }' 00:28:07.851 [2024-10-13 17:38:16.185782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.851 [2024-10-13 17:38:16.214762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.235 Running I/O for 10 seconds... 00:28:09.235 17:38:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:09.235 17:38:17 -- common/autotest_common.sh@852 -- # return 0 00:28:09.235 17:38:17 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:09.235 17:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.235 17:38:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.235 17:38:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.235 17:38:17 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:09.235 17:38:17 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:09.235 17:38:17 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:09.235 17:38:17 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:09.235 17:38:17 -- target/shutdown.sh@57 -- # local ret=1 00:28:09.235 17:38:17 -- target/shutdown.sh@58 -- # local i 00:28:09.235 17:38:17 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:09.235 17:38:17 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:09.235 17:38:17 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:09.235 17:38:17 -- target/shutdown.sh@60 -- # 
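The `--json /dev/fd/63` argument in the bdevperf command line earlier is the footprint of bash process substitution: the generated config is handed to the program as a readable file descriptor rather than a temp file. A tiny sketch of that mechanism, with `cat` standing in for `build/examples/bdevperf` and a hypothetical `gen_config` stub in place of gen_nvmf_target_json:

```shell
# `--json /dev/fd/63` comes from process substitution: <(...) exposes a
# command's output as a /dev/fd path. `cat` stands in for bdevperf here.
gen_config() { printf '{"subsystems": []}\n'; }

out=$(cat <(gen_config))
printf '%s\n' "$out"
```

The exact fd number (63 here) is chosen by the shell and is not guaranteed.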
jq -r '.bdevs[0].num_read_ops' 00:28:09.235 17:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.235 17:38:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.235 17:38:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.235 17:38:17 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:09.235 17:38:17 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:09.235 17:38:17 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:09.495 17:38:17 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:09.495 17:38:17 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:09.495 17:38:17 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:09.495 17:38:17 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:09.495 17:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.495 17:38:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.495 17:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.775 17:38:18 -- target/shutdown.sh@60 -- # read_io_count=129 00:28:09.775 17:38:18 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:28:09.775 17:38:18 -- target/shutdown.sh@64 -- # ret=0 00:28:09.775 17:38:18 -- target/shutdown.sh@65 -- # break 00:28:09.775 17:38:18 -- target/shutdown.sh@69 -- # return 0 00:28:09.775 17:38:18 -- target/shutdown.sh@134 -- # killprocess 3325071 00:28:09.775 17:38:18 -- common/autotest_common.sh@926 -- # '[' -z 3325071 ']' 00:28:09.775 17:38:18 -- common/autotest_common.sh@930 -- # kill -0 3325071 00:28:09.775 17:38:18 -- common/autotest_common.sh@931 -- # uname 00:28:09.775 17:38:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:09.775 17:38:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3325071 00:28:09.775 17:38:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:09.775 17:38:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:09.775 17:38:18 -- common/autotest_common.sh@944 -- # 
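The waitforio helper exercised here polls `bdev_get_iostat` until Nvme1n1 reports at least 100 completed reads (read_io_count goes from 3 to 129 in the trace, so `break` fires and it returns 0). A minimal sketch of that retry loop, with a hypothetical `get_read_ops` stub standing in for the `rpc_cmd ... bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` pipeline:

```shell
# Sketch of target/shutdown.sh's waitforio: poll a read counter up to 10
# times, 0.25 s apart, until it crosses 100 (the `-ge 100` check above).
reads=0
get_read_ops() {          # stub: pretend 64 more reads complete per poll
  reads=$((reads + 64))
}

waitforio() {
  local ret=1 i
  for ((i = 10; i != 0; i--)); do
    get_read_ops
    if [ "$reads" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "I/O threshold reached"
```

With the stub, the second poll reaches 128 reads and the loop exits early, mirroring the 3 → 129 progression in the log.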
echo 'killing process with pid 3325071' killing process with pid 3325071 17:38:18 -- common/autotest_common.sh@945 -- # kill 3325071 00:28:09.775 17:38:18 -- common/autotest_common.sh@950 -- # wait 3325071 00:28:09.775 [2024-10-13 17:38:18.094994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643c90 is same with the state(5) to be set 00:28:09.775 [... identical message repeated for tqpair=0x1643c90 through 17:38:18.095346, elided ...] 00:28:09.776 [2024-10-13 17:38:18.098803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646680 is same with the state(5) to be set 00:28:09.776 [... identical message repeated for tqpair=0x1646680 through 17:38:18.099126, elided ...] 00:28:09.777 [2024-10-13 17:38:18.101096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [... identical message repeated for tqpair=0x16445f0; log truncated at 17:38:18.101200 ...]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101257] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101318] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101375] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.101400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16445f0 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102394] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102404] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.777 [2024-10-13 17:38:18.102451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102461] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102521] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102577] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102625] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102636] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102650] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.102673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644a80 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103233] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103290] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103320] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103348] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.778 [2024-10-13 17:38:18.103376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103394] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103403] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [2024-10-13 17:38:18.103460] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644f30 is same with the state(5) to be set 00:28:09.779 [last message repeated through 2024-10-13 17:38:18.103510]
[2024-10-13 17:38:18.104202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16453e0 is same with the state(5) to be set 00:28:09.779 [last message repeated through 2024-10-13 17:38:18.104508]
[2024-10-13 17:38:18.105256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645870 is same with the state(5) to be set 00:28:09.780 [last message repeated through 2024-10-13 17:38:18.105564]
[2024-10-13 17:38:18.106129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645d20 is same with the state(5) to be set 00:28:09.781 [last message repeated through 2024-10-13 17:38:18.106438]
[2024-10-13 17:38:18.109494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.781
[2024-10-13 17:38:18.109528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.781 [ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2, cid:3]
[2024-10-13 17:38:18.109585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe800d0 is same with the state(5) to be set 00:28:09.782 [same sequence of four request/completion pairs and recv-state error repeated for tqpair=0xe89540 (17:38:18.109683), tqpair=0xe4acb0 (17:38:18.109769), tqpair=0xe6b080 (17:38:18.109852)]
[2024-10-13 17:38:18.109876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782
[2024-10-13 17:38:18.109885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782
[2024-10-13 17:38:18.109893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1
nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.109900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.109909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.109916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.109924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.109931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.109938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82950 is same with the state(5) to be set 00:28:09.782 [2024-10-13 17:38:18.109961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.109970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.109978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.109985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.109994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe89ec0 is same with the state(5) to be set 00:28:09.782 [2024-10-13 17:38:18.110055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9960 is same with the state(5) to be set 00:28:09.782 [2024-10-13 17:38:18.110156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100e9c0 is same with the state(5) to be set 00:28:09.782 [2024-10-13 17:38:18.110241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.782 [2024-10-13 17:38:18.110296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47f50 is same with the state(5) to be set 00:28:09.782 [2024-10-13 17:38:18.110383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.782 [2024-10-13 17:38:18.110394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.782 [2024-10-13 17:38:18.110416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.782 [2024-10-13 17:38:18.110434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.782 [2024-10-13 17:38:18.110446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.782 [2024-10-13 17:38:18.110454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 
17:38:18.110526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110911] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.110985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.110994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.111001] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.111011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.111018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.111027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.111035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.111045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.111052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.111066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.111074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.111084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.111092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.111102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31872 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.111110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.783 [2024-10-13 17:38:18.111119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.783 [2024-10-13 17:38:18.111126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 
17:38:18.111203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:53 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.111480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.111489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdbc80 is same with the state(5) to be set 00:28:09.784 [2024-10-13 17:38:18.111529] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfdbc80 was disconnected and freed. reset controller. 00:28:09.784 [2024-10-13 17:38:18.113408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.784 [2024-10-13 17:38:18.113640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.784 [2024-10-13 17:38:18.113647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:09.785 [2024-10-13 17:38:18.113723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.113989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.113996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.114005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.785 [2024-10-13 17:38:18.114012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.785 [2024-10-13 17:38:18.115746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645d20 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the 
state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 
[2024-10-13 17:38:18.116424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116477] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.785 [2024-10-13 17:38:18.116506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116532] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116586] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.116609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16461d0 is same with the state(5) to be set 00:28:09.786 [2024-10-13 17:38:18.125008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:09.786 [2024-10-13 17:38:18.125209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125302] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:47 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 
17:38:18.125593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.786 [2024-10-13 17:38:18.125659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.786 [2024-10-13 17:38:18.125732] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe30c10 was disconnected and freed. reset controller. 
00:28:09.786 [2024-10-13 17:38:18.126029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:09.787 [2024-10-13 17:38:18.126075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100e9c0 (9): Bad file descriptor 00:28:09.787 [2024-10-13 17:38:18.126106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe800d0 (9): Bad file descriptor 00:28:09.787 [2024-10-13 17:38:18.126126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe89540 (9): Bad file descriptor 00:28:09.787 [2024-10-13 17:38:18.126144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4acb0 (9): Bad file descriptor 00:28:09.787 [2024-10-13 17:38:18.126159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6b080 (9): Bad file descriptor 00:28:09.787 [2024-10-13 17:38:18.126175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe82950 (9): Bad file descriptor 00:28:09.787 [2024-10-13 17:38:18.126189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe89ec0 (9): Bad file descriptor 00:28:09.787 [2024-10-13 17:38:18.126222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.787 [2024-10-13 17:38:18.126231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.126240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.787 [2024-10-13 17:38:18.126247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.126255] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.787 [2024-10-13 17:38:18.126262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.126274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.787 [2024-10-13 17:38:18.126281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.126289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebe420 is same with the state(5) to be set 00:28:09.787 [2024-10-13 17:38:18.126306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9960 (9): Bad file descriptor 00:28:09.787 [2024-10-13 17:38:18.126325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe47f50 (9): Bad file descriptor 00:28:09.787 [2024-10-13 17:38:18.127819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.127838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.127852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.127862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.127873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.127883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.127894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.127903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.127914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.127923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.127934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.127943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.127954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.127963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.127974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.127983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 
17:38:18.127994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:09.787 [2024-10-13 17:38:18.128304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.787 [2024-10-13 17:38:18.128361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.787 [2024-10-13 17:38:18.128370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128394] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 
17:38:18.128676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.128936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.788 [2024-10-13 17:38:18.128944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.788 [2024-10-13 17:38:18.129002] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe34dc0 was disconnected and freed. reset controller. 
00:28:09.788 [2024-10-13 17:38:18.129069] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:09.788 [2024-10-13 17:38:18.130765] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:09.788 [2024-10-13 17:38:18.131304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.788 [2024-10-13 17:38:18.131645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.788 [2024-10-13 17:38:18.131658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100e9c0 with addr=10.0.0.2, port=4420 00:28:09.788 [2024-10-13 17:38:18.131668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100e9c0 is same with the state(5) to be set 00:28:09.789 [2024-10-13 17:38:18.132007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.789 [2024-10-13 17:38:18.132393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.789 [2024-10-13 17:38:18.132430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe82950 with addr=10.0.0.2, port=4420 00:28:09.789 [2024-10-13 17:38:18.132441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82950 is same with the state(5) to be set 00:28:09.789 [2024-10-13 17:38:18.132504] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:09.789 [2024-10-13 17:38:18.132556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 
17:38:18.132698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.132974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:09.789 [2024-10-13 17:38:18.132991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.132999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.133009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.133016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.133026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.133033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.133043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.133050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.133060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.133075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.133085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.133098] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.133107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.133115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.133124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.133132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.789 [2024-10-13 17:38:18.133141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.789 [2024-10-13 17:38:18.133149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:18 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 
17:38:18.133389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:09.790 [2024-10-13 17:38:18.133679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.790 [2024-10-13 17:38:18.133687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe80d0 is same with the state(5) to be set 00:28:09.790 [2024-10-13 17:38:18.133736] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfe80d0 was disconnected and freed. reset controller. 00:28:09.790 [2024-10-13 17:38:18.133782] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:09.790 [2024-10-13 17:38:18.133822] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:09.790 [2024-10-13 17:38:18.134132] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:09.790 [2024-10-13 17:38:18.134224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:09.790 [2024-10-13 17:38:18.134243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebe420 (9): Bad file descriptor 00:28:09.790 [2024-10-13 17:38:18.134260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100e9c0 (9): Bad file descriptor 00:28:09.790 [2024-10-13 17:38:18.134270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe82950 (9): Bad file descriptor 00:28:09.790 [2024-10-13 17:38:18.135547] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:09.790 [2024-10-13 17:38:18.135826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:09.790 [2024-10-13 17:38:18.135855] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:09.790 [2024-10-13 17:38:18.135864] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:09.790 [2024-10-13 17:38:18.135874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:09.790 [2024-10-13 17:38:18.135889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:09.790 [2024-10-13 17:38:18.135897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:09.790 [2024-10-13 17:38:18.135905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:09.790 [2024-10-13 17:38:18.135991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.790 [2024-10-13 17:38:18.136001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.790 [2024-10-13 17:38:18.136406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.790 [2024-10-13 17:38:18.136801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.790 [2024-10-13 17:38:18.136815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebe420 with addr=10.0.0.2, port=4420 00:28:09.791 [2024-10-13 17:38:18.136825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebe420 is same with the state(5) to be set 00:28:09.791 [2024-10-13 17:38:18.137038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.791 [2024-10-13 17:38:18.137398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.791 [2024-10-13 17:38:18.137409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4acb0 with addr=10.0.0.2, port=4420 00:28:09.791 [2024-10-13 17:38:18.137417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xe4acb0 is same with the state(5) to be set 00:28:09.791 [2024-10-13 17:38:18.137737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebe420 (9): Bad file descriptor 00:28:09.791 [2024-10-13 17:38:18.137751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4acb0 (9): Bad file descriptor 00:28:09.791 [2024-10-13 17:38:18.137864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:09.791 [2024-10-13 17:38:18.137874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:09.791 [2024-10-13 17:38:18.137882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:09.791 [2024-10-13 17:38:18.137894] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:09.791 [2024-10-13 17:38:18.137900] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:09.791 [2024-10-13 17:38:18.137907] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:28:09.791 [2024-10-13 17:38:18.137937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.137948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.137962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.137975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.137986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.137994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138046] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 
17:38:18.138347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.791 [2024-10-13 17:38:18.138512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.791 [2024-10-13 17:38:18.138519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138734] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138826] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.138990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.138998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.139007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.139015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 
17:38:18.139024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.139031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.139040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.139048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.139058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17c00 is same with the state(5) to be set 00:28:09.792 [2024-10-13 17:38:18.140335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.140351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.140364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.140372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.140384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.140392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.140404] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.140413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.140424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.140433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.140445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.140454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.140465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.140473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.140485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.140493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.140503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.792 [2024-10-13 17:38:18.140511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.792 [2024-10-13 17:38:18.140520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 
[2024-10-13 17:38:18.140615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.140985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.140993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.141005] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.141012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.141021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.141029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.141038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.141046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.141055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.793 [2024-10-13 17:38:18.141068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.793 [2024-10-13 17:38:18.141077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141101] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33024 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 
17:38:18.141301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794 [2024-10-13 17:38:18.141386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.794 [2024-10-13 17:38:18.141393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.794
[... repeated record pairs elided: for each outstanding READ/WRITE command on qid:1 (nsid:1, len:128, lba 24320-34688), nvme_qpair.c:243 nvme_io_qpair_print_command prints the command and nvme_qpair.c:474 spdk_nvme_print_completion reports ABORTED - SQ DELETION (00/08) ...]
00:28:09.794 [2024-10-13 17:38:18.141469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2e050 is same with the state(5) to be set
[... same ABORTED - SQ DELETION completion pattern repeated for the outstanding commands on the next qpair ...]
00:28:09.796 [2024-10-13 17:38:18.143829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2f630 is same with the state(5) to be set
[... pattern continues ...]
00:28:09.797 [2024-10-13 17:38:18.145939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:14 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.145947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.145956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.145964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.145973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.145981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.145990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.145998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.146014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.146031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.146049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.146069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.146086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.146106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.146123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 
17:38:18.146140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.146157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.146174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.146191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.146199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe321f0 is same with the state(5) to be set 00:28:09.797 [2024-10-13 17:38:18.147433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.147448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.147460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.147469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.147480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.147489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.147500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.147509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.147521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.147529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.797 [2024-10-13 17:38:18.147541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.797 [2024-10-13 17:38:18.147549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24960 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 
17:38:18.147688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:09.798 [2024-10-13 17:38:18.147980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.147987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.147996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.148005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.148015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.148022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.148032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.148039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.148048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.148056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.148072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.148079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.148089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.148097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.148106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.148113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.148123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.148131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.798 [2024-10-13 17:38:18.148140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.798 [2024-10-13 17:38:18.148148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 
17:38:18.148370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.148560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.148568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe337d0 is same with the state(5) to be set 00:28:09.799 [2024-10-13 17:38:18.150747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:09.799 [2024-10-13 17:38:18.150854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150947] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.150991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.150998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.151008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.151015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.151025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.151033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.799 [2024-10-13 17:38:18.151042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.799 [2024-10-13 17:38:18.151050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 
17:38:18.151245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151343] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 
[2024-10-13 17:38:18.151540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.800 [2024-10-13 17:38:18.151690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.800 [2024-10-13 17:38:18.151700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.801 [2024-10-13 17:38:18.151709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.801 [2024-10-13 17:38:18.151718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.801 [2024-10-13 17:38:18.151726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.801 [2024-10-13 17:38:18.151735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.801 [2024-10-13 17:38:18.151743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.801 [2024-10-13 17:38:18.151753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.801 [2024-10-13 17:38:18.151760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.801 [2024-10-13 17:38:18.151769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.801 [2024-10-13 17:38:18.151777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.801 [2024-10-13 17:38:18.151786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.801 [2024-10-13 17:38:18.151793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.801 [2024-10-13 17:38:18.151803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.801 [2024-10-13 17:38:18.151810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.801 [2024-10-13 17:38:18.151820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:09.801 [2024-10-13 17:38:18.151827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.801 [2024-10-13 17:38:18.151837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.801 [2024-10-13 17:38:18.151844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.801 [2024-10-13 17:38:18.151853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.801 [2024-10-13 17:38:18.151861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.801 [2024-10-13 17:38:18.151869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe363a0 is same with the state(5) to be set 00:28:09.801 [2024-10-13 17:38:18.153331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.801 [2024-10-13 17:38:18.153349] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.801 [2024-10-13 17:38:18.153357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.801 [2024-10-13 17:38:18.153368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:09.801 [2024-10-13 17:38:18.153377] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:09.801 [2024-10-13 17:38:18.153450] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:09.801 [2024-10-13 17:38:18.153464] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:09.801 [2024-10-13 17:38:18.153480] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:09.801 [2024-10-13 17:38:18.153548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:09.801 [2024-10-13 17:38:18.153559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:09.801 task offset: 29184 on job bdev=Nvme2n1 fails
00:28:09.801 [2024-10-13T15:38:18.325Z] All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended with error.
00:28:09.801                                                                              Latency(us)
00:28:09.801 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:09.801 Nvme1n1            :       0.59     278.48      17.41     108.68     0.00  163965.51   92624.21  161655.47
00:28:09.801 Nvme2n1            :       0.56     370.10      23.13     113.88     0.00  129213.74   11141.12  126702.93
00:28:09.801 Nvme3n1            :       0.58     356.08      22.25     109.56     0.00  132556.20   59856.21  138936.32
00:28:09.801 Nvme4n1            :       0.59     351.77      21.99     108.24     0.00  132404.40   58108.59  143305.39
00:28:09.801 Nvme5n1            :       0.59     350.37      21.90     107.81     0.00  131123.40   75584.85  104420.69
00:28:09.801 Nvme6n1            :       0.58     360.87      22.55     111.04     0.00  125225.16   33423.36  106605.23
00:28:09.801 Nvme7n1            :       0.60     348.99      21.81     107.38     0.00  128044.17   66846.72  103109.97
00:28:09.801 Nvme8n1            :       0.60     347.61      21.73     106.96     0.00  126736.67   67720.53  101362.35
00:28:09.801 Nvme9n1            :       0.58     358.98      22.44     110.46     0.00  120435.25   20425.39  113595.73
00:28:09.801 Nvme10n1           :       0.60     272.57      17.04     106.37     0.00  147713.74   91313.49  125829.12
00:28:09.801 ===================================================================================================================
00:28:09.801 Total              :               3395.82     212.24    1090.36     0.00  133002.99   11141.12  161655.47
00:28:09.801 [2024-10-13 17:38:18.180500] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:09.801 [2024-10-13 17:38:18.180550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:09.801 [2024-10-13 17:38:18.180897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.801 [2024-10-13 17:38:18.181092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.801 [2024-10-13 17:38:18.181104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe47f50 with addr=10.0.0.2, port=4420
00:28:09.801 [2024-10-13 17:38:18.181115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47f50 is same with the state(5) to be set
00:28:09.801 [2024-10-13 17:38:18.181193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.801 [2024-10-13 17:38:18.181545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.801 [2024-10-13 17:38:18.181554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6b080 with addr=10.0.0.2, port=4420
00:28:09.801 [2024-10-13 17:38:18.181562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6b080 is same with the state(5) to be set
00:28:09.801 [2024-10-13 17:38:18.181631] posix.c:1032:posix_sock_create: *ERROR*:
connect() failed, errno = 111 00:28:09.801 [2024-10-13 17:38:18.181966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.801 [2024-10-13 17:38:18.181975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe800d0 with addr=10.0.0.2, port=4420 00:28:09.801 [2024-10-13 17:38:18.181982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe800d0 is same with the state(5) to be set 00:28:09.801 [2024-10-13 17:38:18.183585] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:09.801 [2024-10-13 17:38:18.183601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:09.801 [2024-10-13 17:38:18.183967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.801 [2024-10-13 17:38:18.184185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.184196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe89540 with addr=10.0.0.2, port=4420 00:28:09.802 [2024-10-13 17:38:18.184204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe89540 is same with the state(5) to be set 00:28:09.802 [2024-10-13 17:38:18.184519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.184704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.184713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9960 with addr=10.0.0.2, port=4420 00:28:09.802 [2024-10-13 17:38:18.184721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9960 is same with the state(5) to be set 00:28:09.802 [2024-10-13 17:38:18.184770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:09.802 [2024-10-13 17:38:18.185049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.185058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe89ec0 with addr=10.0.0.2, port=4420 00:28:09.802 [2024-10-13 17:38:18.185071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe89ec0 is same with the state(5) to be set 00:28:09.802 [2024-10-13 17:38:18.185082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe47f50 (9): Bad file descriptor 00:28:09.802 [2024-10-13 17:38:18.185094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6b080 (9): Bad file descriptor 00:28:09.802 [2024-10-13 17:38:18.185103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe800d0 (9): Bad file descriptor 00:28:09.802 [2024-10-13 17:38:18.185132] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:09.802 [2024-10-13 17:38:18.185144] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:09.802 [2024-10-13 17:38:18.185158] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:09.802 [2024-10-13 17:38:18.185179] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:09.802 [2024-10-13 17:38:18.185190] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:09.802 [2024-10-13 17:38:18.185255] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:09.802 [2024-10-13 17:38:18.185265] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:09.802 [2024-10-13 17:38:18.185639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.185833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.185842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe82950 with addr=10.0.0.2, port=4420 00:28:09.802 [2024-10-13 17:38:18.185849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82950 is same with the state(5) to be set 00:28:09.802 [2024-10-13 17:38:18.186049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.186460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.186470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100e9c0 with addr=10.0.0.2, port=4420 00:28:09.802 [2024-10-13 17:38:18.186477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100e9c0 is same with the state(5) to be set 00:28:09.802 [2024-10-13 17:38:18.186486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe89540 (9): Bad file descriptor 00:28:09.802 [2024-10-13 17:38:18.186495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9960 (9): Bad file descriptor 00:28:09.802 [2024-10-13 17:38:18.186504] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe89ec0 (9): Bad file descriptor 00:28:09.802 [2024-10-13 17:38:18.186512] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.802 [2024-10-13 17:38:18.186519] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.802 [2024-10-13 17:38:18.186527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.802 [2024-10-13 17:38:18.186538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:09.802 [2024-10-13 17:38:18.186545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:09.802 [2024-10-13 17:38:18.186552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:09.802 [2024-10-13 17:38:18.186561] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:09.802 [2024-10-13 17:38:18.186568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:09.802 [2024-10-13 17:38:18.186575] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:09.802 [2024-10-13 17:38:18.186644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.802 [2024-10-13 17:38:18.186652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.802 [2024-10-13 17:38:18.186658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.802 [2024-10-13 17:38:18.186967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.187281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.187292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4acb0 with addr=10.0.0.2, port=4420 00:28:09.802 [2024-10-13 17:38:18.187299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4acb0 is same with the state(5) to be set 00:28:09.802 [2024-10-13 17:38:18.187632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.187990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.802 [2024-10-13 17:38:18.188000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebe420 with addr=10.0.0.2, port=4420 00:28:09.802 [2024-10-13 17:38:18.188007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebe420 is same with the state(5) to be set 00:28:09.802 [2024-10-13 17:38:18.188016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe82950 (9): Bad file descriptor 00:28:09.802 [2024-10-13 17:38:18.188026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100e9c0 (9): Bad file descriptor 00:28:09.802 [2024-10-13 17:38:18.188034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:09.802 [2024-10-13 17:38:18.188040] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:09.802 [2024-10-13 17:38:18.188047] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:28:09.802 [2024-10-13 17:38:18.188057] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:09.802 [2024-10-13 17:38:18.188078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:09.802 [2024-10-13 17:38:18.188085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:09.802 [2024-10-13 17:38:18.188095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:09.802 [2024-10-13 17:38:18.188101] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:09.802 [2024-10-13 17:38:18.188108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:09.802 [2024-10-13 17:38:18.188136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.802 [2024-10-13 17:38:18.188143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.802 [2024-10-13 17:38:18.188149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.802 [2024-10-13 17:38:18.188157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4acb0 (9): Bad file descriptor 00:28:09.802 [2024-10-13 17:38:18.188166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebe420 (9): Bad file descriptor 00:28:09.802 [2024-10-13 17:38:18.188174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:09.802 [2024-10-13 17:38:18.188180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:09.802 [2024-10-13 17:38:18.188187] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:09.802 [2024-10-13 17:38:18.188197] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:09.802 [2024-10-13 17:38:18.188203] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:09.802 [2024-10-13 17:38:18.188210] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:09.802 [2024-10-13 17:38:18.188248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.802 [2024-10-13 17:38:18.188256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.802 [2024-10-13 17:38:18.188262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:09.802 [2024-10-13 17:38:18.188269] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:09.802 [2024-10-13 17:38:18.188279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:28:09.802 [2024-10-13 17:38:18.188288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:09.802 [2024-10-13 17:38:18.188295] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:09.802 [2024-10-13 17:38:18.188302] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:09.802 [2024-10-13 17:38:18.188329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.803 [2024-10-13 17:38:18.188336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.063 17:38:18 -- target/shutdown.sh@135 -- # nvmfpid= 00:28:10.063 17:38:18 -- target/shutdown.sh@138 -- # sleep 1 00:28:11.003 17:38:19 -- target/shutdown.sh@141 -- # kill -9 3325429 00:28:11.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3325429) - No such process 00:28:11.003 17:38:19 -- target/shutdown.sh@141 -- # true 00:28:11.003 17:38:19 -- target/shutdown.sh@143 -- # stoptarget 00:28:11.003 17:38:19 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:11.003 17:38:19 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:11.003 17:38:19 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:11.003 17:38:19 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:11.003 17:38:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:11.003 17:38:19 -- nvmf/common.sh@116 -- # sync 00:28:11.003 17:38:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:11.003 17:38:19 -- nvmf/common.sh@119 -- # set +e 00:28:11.003 17:38:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:11.003 17:38:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:11.003 rmmod nvme_tcp 
00:28:11.003 rmmod nvme_fabrics 00:28:11.003 rmmod nvme_keyring 00:28:11.003 17:38:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:11.003 17:38:19 -- nvmf/common.sh@123 -- # set -e 00:28:11.003 17:38:19 -- nvmf/common.sh@124 -- # return 0 00:28:11.003 17:38:19 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:28:11.003 17:38:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:11.003 17:38:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:11.003 17:38:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:11.003 17:38:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.003 17:38:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:11.003 17:38:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.003 17:38:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.003 17:38:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.047 17:38:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:13.047 00:28:13.047 real 0m7.229s 00:28:13.047 user 0m16.438s 00:28:13.047 sys 0m1.137s 00:28:13.047 17:38:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.047 17:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.047 ************************************ 00:28:13.047 END TEST nvmf_shutdown_tc3 00:28:13.047 ************************************ 00:28:13.315 17:38:21 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:28:13.315 00:28:13.315 real 0m32.528s 00:28:13.315 user 1m16.966s 00:28:13.315 sys 0m9.382s 00:28:13.315 17:38:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.315 17:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.315 ************************************ 00:28:13.315 END TEST nvmf_shutdown 00:28:13.315 ************************************ 00:28:13.315 17:38:21 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:13.315 17:38:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:13.315 
17:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.315 17:38:21 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:13.315 17:38:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:13.315 17:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.315 17:38:21 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:13.315 17:38:21 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:13.315 17:38:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:13.315 17:38:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:13.315 17:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.315 ************************************ 00:28:13.315 START TEST nvmf_multicontroller 00:28:13.315 ************************************ 00:28:13.315 17:38:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:13.315 * Looking for test storage... 
00:28:13.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.315 17:38:21 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.315 17:38:21 -- nvmf/common.sh@7 -- # uname -s 00:28:13.315 17:38:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.315 17:38:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.315 17:38:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.315 17:38:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.315 17:38:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.315 17:38:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.315 17:38:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.315 17:38:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.315 17:38:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.315 17:38:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.315 17:38:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:13.315 17:38:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:13.315 17:38:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.315 17:38:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.315 17:38:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.315 17:38:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.315 17:38:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.315 17:38:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.315 17:38:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.315 17:38:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.315 17:38:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.315 17:38:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.315 17:38:21 -- paths/export.sh@5 -- # export PATH 00:28:13.315 17:38:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.315 17:38:21 -- nvmf/common.sh@46 -- # : 0 00:28:13.315 17:38:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:13.315 17:38:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:13.315 17:38:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:13.315 17:38:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.315 17:38:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.315 17:38:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:13.315 17:38:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:13.315 17:38:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:13.315 17:38:21 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:13.315 17:38:21 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:13.315 17:38:21 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:13.315 17:38:21 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:13.315 17:38:21 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:13.315 17:38:21 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:13.315 17:38:21 -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:13.315 17:38:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:13.315 17:38:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.315 17:38:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:13.315 17:38:21 -- nvmf/common.sh@398 -- # local -g 
is_hw=no 00:28:13.315 17:38:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:13.315 17:38:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.315 17:38:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.315 17:38:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.315 17:38:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:13.315 17:38:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:13.315 17:38:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:13.315 17:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:21.451 17:38:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:21.451 17:38:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:21.451 17:38:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:21.451 17:38:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:21.451 17:38:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:21.451 17:38:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:21.451 17:38:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:21.451 17:38:28 -- nvmf/common.sh@294 -- # net_devs=() 00:28:21.451 17:38:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:21.451 17:38:28 -- nvmf/common.sh@295 -- # e810=() 00:28:21.451 17:38:28 -- nvmf/common.sh@295 -- # local -ga e810 00:28:21.451 17:38:28 -- nvmf/common.sh@296 -- # x722=() 00:28:21.451 17:38:28 -- nvmf/common.sh@296 -- # local -ga x722 00:28:21.451 17:38:28 -- nvmf/common.sh@297 -- # mlx=() 00:28:21.451 17:38:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:21.451 17:38:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.451 17:38:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.451 17:38:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.451 17:38:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.451 17:38:28 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.451 17:38:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.451 17:38:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.452 17:38:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.452 17:38:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.452 17:38:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.452 17:38:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.452 17:38:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:21.452 17:38:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:21.452 17:38:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:21.452 17:38:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:21.452 17:38:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:21.452 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:21.452 17:38:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:21.452 17:38:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:21.452 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:21.452 17:38:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:21.452 17:38:28 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:21.452 17:38:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:21.452 17:38:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.452 17:38:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:21.452 17:38:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.452 17:38:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:21.452 Found net devices under 0000:31:00.0: cvl_0_0 00:28:21.452 17:38:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.452 17:38:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:21.452 17:38:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.452 17:38:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:21.452 17:38:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.452 17:38:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:21.452 Found net devices under 0000:31:00.1: cvl_0_1 00:28:21.452 17:38:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.452 17:38:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:21.452 17:38:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:21.452 17:38:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:21.452 17:38:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:21.452 17:38:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.452 17:38:28 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.452 17:38:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.452 17:38:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:21.452 17:38:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.452 17:38:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.452 17:38:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:21.452 17:38:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.452 17:38:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.452 17:38:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:21.452 17:38:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:21.452 17:38:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.452 17:38:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.452 17:38:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.452 17:38:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.452 17:38:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:21.452 17:38:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.452 17:38:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.452 17:38:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.452 17:38:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:21.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:21.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:28:21.452 00:28:21.452 --- 10.0.0.2 ping statistics --- 00:28:21.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.452 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:28:21.452 17:38:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:21.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:28:21.452 00:28:21.452 --- 10.0.0.1 ping statistics --- 00:28:21.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.452 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:28:21.452 17:38:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.452 17:38:29 -- nvmf/common.sh@410 -- # return 0 00:28:21.452 17:38:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:21.452 17:38:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.452 17:38:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:21.452 17:38:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:21.452 17:38:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.452 17:38:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:21.452 17:38:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:21.452 17:38:29 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:21.452 17:38:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:21.452 17:38:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:21.452 17:38:29 -- common/autotest_common.sh@10 -- # set +x 00:28:21.452 17:38:29 -- nvmf/common.sh@469 -- # nvmfpid=3330278 00:28:21.452 17:38:29 -- nvmf/common.sh@470 -- # waitforlisten 3330278 00:28:21.452 17:38:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:21.452 17:38:29 -- 
common/autotest_common.sh@819 -- # '[' -z 3330278 ']' 00:28:21.452 17:38:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.452 17:38:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:21.452 17:38:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.452 17:38:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:21.452 17:38:29 -- common/autotest_common.sh@10 -- # set +x 00:28:21.452 [2024-10-13 17:38:29.266763] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:21.452 [2024-10-13 17:38:29.266822] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.452 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.452 [2024-10-13 17:38:29.354210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:21.452 [2024-10-13 17:38:29.383386] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:21.452 [2024-10-13 17:38:29.383507] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.452 [2024-10-13 17:38:29.383515] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.452 [2024-10-13 17:38:29.383523] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:21.452 [2024-10-13 17:38:29.383647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.452 [2024-10-13 17:38:29.383802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.452 [2024-10-13 17:38:29.383803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.713 17:38:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:21.713 17:38:30 -- common/autotest_common.sh@852 -- # return 0 00:28:21.713 17:38:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:21.713 17:38:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:21.713 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.713 17:38:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.713 17:38:30 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:21.713 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.713 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.713 [2024-10-13 17:38:30.149140] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.713 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.713 17:38:30 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:21.713 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.713 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.713 Malloc0 00:28:21.713 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.713 17:38:30 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.713 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.713 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.713 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.713 17:38:30 -- host/multicontroller.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.713 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.713 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.713 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.713 17:38:30 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.713 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.713 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.713 [2024-10-13 17:38:30.200235] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.713 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.713 17:38:30 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:21.713 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.713 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.713 [2024-10-13 17:38:30.208169] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:21.713 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.713 17:38:30 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:21.713 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.713 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.713 Malloc1 00:28:21.713 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.713 17:38:30 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:21.713 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.713 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.713 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.713 17:38:30 -- 
host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:21.713 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.713 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.973 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.973 17:38:30 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:21.973 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.973 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.973 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.973 17:38:30 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:21.974 17:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.974 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:21.974 17:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.974 17:38:30 -- host/multicontroller.sh@44 -- # bdevperf_pid=3330483 00:28:21.974 17:38:30 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:21.974 17:38:30 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:21.974 17:38:30 -- host/multicontroller.sh@47 -- # waitforlisten 3330483 /var/tmp/bdevperf.sock 00:28:21.974 17:38:30 -- common/autotest_common.sh@819 -- # '[' -z 3330483 ']' 00:28:21.974 17:38:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:21.974 17:38:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:21.974 17:38:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:21.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:21.974 17:38:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:21.974 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:28:22.914 17:38:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:22.914 17:38:31 -- common/autotest_common.sh@852 -- # return 0 00:28:22.914 17:38:31 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:22.914 17:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.914 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:22.914 NVMe0n1 00:28:22.914 17:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.914 17:38:31 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:22.914 17:38:31 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:22.914 17:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.914 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:22.914 17:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.914 1 00:28:22.914 17:38:31 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:22.914 17:38:31 -- common/autotest_common.sh@640 -- # local es=0 00:28:22.914 17:38:31 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:22.914 17:38:31 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:22.914 17:38:31 -- common/autotest_common.sh@632 -- # 
case "$(type -t "$arg")" in 00:28:22.914 17:38:31 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:22.914 17:38:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:22.914 17:38:31 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:22.914 17:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.914 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:22.914 request: 00:28:22.914 { 00:28:22.914 "name": "NVMe0", 00:28:22.914 "trtype": "tcp", 00:28:22.914 "traddr": "10.0.0.2", 00:28:22.914 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:22.914 "hostaddr": "10.0.0.2", 00:28:22.914 "hostsvcid": "60000", 00:28:22.914 "adrfam": "ipv4", 00:28:22.914 "trsvcid": "4420", 00:28:22.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.914 "method": "bdev_nvme_attach_controller", 00:28:22.914 "req_id": 1 00:28:22.914 } 00:28:22.914 Got JSON-RPC error response 00:28:22.914 response: 00:28:22.914 { 00:28:22.914 "code": -114, 00:28:22.914 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:22.914 } 00:28:22.915 17:38:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:22.915 17:38:31 -- common/autotest_common.sh@643 -- # es=1 00:28:22.915 17:38:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:22.915 17:38:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:22.915 17:38:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:22.915 17:38:31 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.915 17:38:31 -- common/autotest_common.sh@640 -- # local es=0 00:28:22.915 17:38:31 -- common/autotest_common.sh@642 -- # valid_exec_arg 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.915 17:38:31 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:22.915 17:38:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:22.915 17:38:31 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:22.915 17:38:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:22.915 17:38:31 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.915 17:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.915 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:22.915 request: 00:28:22.915 { 00:28:22.915 "name": "NVMe0", 00:28:22.915 "trtype": "tcp", 00:28:22.915 "traddr": "10.0.0.2", 00:28:22.915 "hostaddr": "10.0.0.2", 00:28:22.915 "hostsvcid": "60000", 00:28:22.915 "adrfam": "ipv4", 00:28:22.915 "trsvcid": "4420", 00:28:22.915 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:22.915 "method": "bdev_nvme_attach_controller", 00:28:22.915 "req_id": 1 00:28:22.915 } 00:28:22.915 Got JSON-RPC error response 00:28:22.915 response: 00:28:22.915 { 00:28:22.915 "code": -114, 00:28:22.915 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:22.915 } 00:28:22.915 17:38:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:22.915 17:38:31 -- common/autotest_common.sh@643 -- # es=1 00:28:22.915 17:38:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:22.915 17:38:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:22.915 17:38:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:22.915 17:38:31 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:22.915 17:38:31 -- common/autotest_common.sh@640 -- # local es=0 00:28:22.915 17:38:31 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:22.915 17:38:31 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:22.915 17:38:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:22.915 17:38:31 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:22.915 17:38:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:22.915 17:38:31 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:22.915 17:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.915 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:22.915 request: 00:28:22.915 { 00:28:22.915 "name": "NVMe0", 00:28:22.915 "trtype": "tcp", 00:28:22.915 "traddr": "10.0.0.2", 00:28:22.915 "hostaddr": "10.0.0.2", 00:28:22.915 "hostsvcid": "60000", 00:28:22.915 "adrfam": "ipv4", 00:28:22.915 "trsvcid": "4420", 00:28:22.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.915 "multipath": "disable", 00:28:22.915 "method": "bdev_nvme_attach_controller", 00:28:22.915 "req_id": 1 00:28:22.915 } 00:28:22.915 Got JSON-RPC error response 00:28:22.915 response: 00:28:22.915 { 00:28:22.915 "code": -114, 00:28:22.915 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:22.915 } 00:28:22.915 17:38:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:22.915 17:38:31 -- common/autotest_common.sh@643 -- # es=1 00:28:22.915 17:38:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:22.915 17:38:31 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:22.915 17:38:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:22.915 17:38:31 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:22.915 17:38:31 -- common/autotest_common.sh@640 -- # local es=0 00:28:22.915 17:38:31 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:22.915 17:38:31 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:22.915 17:38:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:22.915 17:38:31 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:22.915 17:38:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:22.915 17:38:31 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:22.915 17:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.915 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:22.915 request: 00:28:22.915 { 00:28:22.915 "name": "NVMe0", 00:28:22.915 "trtype": "tcp", 00:28:22.915 "traddr": "10.0.0.2", 00:28:22.915 "hostaddr": "10.0.0.2", 00:28:22.915 "hostsvcid": "60000", 00:28:22.915 "adrfam": "ipv4", 00:28:22.915 "trsvcid": "4420", 00:28:22.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.915 "multipath": "failover", 00:28:22.915 "method": "bdev_nvme_attach_controller", 00:28:22.915 "req_id": 1 00:28:22.915 } 00:28:22.915 Got JSON-RPC error response 00:28:22.915 response: 00:28:22.915 { 00:28:22.915 "code": -114, 00:28:22.915 "message": "A controller named NVMe0 already exists with the 
specified network path\n" 00:28:22.915 } 00:28:22.915 17:38:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:22.915 17:38:31 -- common/autotest_common.sh@643 -- # es=1 00:28:22.915 17:38:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:22.915 17:38:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:22.915 17:38:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:22.915 17:38:31 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:22.915 17:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.915 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:23.175 00:28:23.175 17:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.175 17:38:31 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:23.175 17:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.175 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:23.175 17:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.176 17:38:31 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:23.176 17:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.176 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:23.436 00:28:23.436 17:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.436 17:38:31 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:23.436 17:38:31 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:23.436 17:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.436 17:38:31 -- common/autotest_common.sh@10 -- # set +x 
00:28:23.436 17:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.436 17:38:31 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:23.436 17:38:31 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:24.376 0 00:28:24.376 17:38:32 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:24.376 17:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.376 17:38:32 -- common/autotest_common.sh@10 -- # set +x 00:28:24.376 17:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.376 17:38:32 -- host/multicontroller.sh@100 -- # killprocess 3330483 00:28:24.376 17:38:32 -- common/autotest_common.sh@926 -- # '[' -z 3330483 ']' 00:28:24.376 17:38:32 -- common/autotest_common.sh@930 -- # kill -0 3330483 00:28:24.376 17:38:32 -- common/autotest_common.sh@931 -- # uname 00:28:24.376 17:38:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:24.376 17:38:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3330483 00:28:24.636 17:38:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:24.636 17:38:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:24.637 17:38:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3330483' 00:28:24.637 killing process with pid 3330483 00:28:24.637 17:38:32 -- common/autotest_common.sh@945 -- # kill 3330483 00:28:24.637 17:38:32 -- common/autotest_common.sh@950 -- # wait 3330483 00:28:24.637 17:38:33 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.637 17:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.637 17:38:33 -- common/autotest_common.sh@10 -- # set +x 00:28:24.637 17:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.637 17:38:33 -- host/multicontroller.sh@103 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:24.637 17:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.637 17:38:33 -- common/autotest_common.sh@10 -- # set +x 00:28:24.637 17:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.637 17:38:33 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:24.637 17:38:33 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:24.637 17:38:33 -- common/autotest_common.sh@1597 -- # read -r file 00:28:24.637 17:38:33 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:24.637 17:38:33 -- common/autotest_common.sh@1596 -- # sort -u 00:28:24.637 17:38:33 -- common/autotest_common.sh@1598 -- # cat 00:28:24.637 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:24.637 [2024-10-13 17:38:30.305911] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:24.637 [2024-10-13 17:38:30.305968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330483 ] 00:28:24.637 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.637 [2024-10-13 17:38:30.367733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.637 [2024-10-13 17:38:30.397115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.637 [2024-10-13 17:38:31.730749] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 4317e753-0106-4efe-81d4-a95599498773 already exists 00:28:24.637 [2024-10-13 17:38:31.730778] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:4317e753-0106-4efe-81d4-a95599498773 alias for bdev NVMe1n1 00:28:24.637 [2024-10-13 17:38:31.730788] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:24.637 Running I/O for 1 seconds... 00:28:24.637 00:28:24.637 Latency(us) 00:28:24.637 [2024-10-13T15:38:33.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.637 [2024-10-13T15:38:33.161Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:24.637 NVMe0n1 : 1.00 30371.44 118.64 0.00 0.00 4205.74 2307.41 14090.24 00:28:24.637 [2024-10-13T15:38:33.161Z] =================================================================================================================== 00:28:24.637 [2024-10-13T15:38:33.161Z] Total : 30371.44 118.64 0.00 0.00 4205.74 2307.41 14090.24 00:28:24.637 Received shutdown signal, test time was about 1.000000 seconds 00:28:24.637 00:28:24.637 Latency(us) 00:28:24.637 [2024-10-13T15:38:33.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.637 [2024-10-13T15:38:33.161Z] =================================================================================================================== 00:28:24.637 
[2024-10-13T15:38:33.161Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.637 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:24.637 17:38:33 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:24.637 17:38:33 -- common/autotest_common.sh@1597 -- # read -r file 00:28:24.637 17:38:33 -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:24.637 17:38:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:24.637 17:38:33 -- nvmf/common.sh@116 -- # sync 00:28:24.637 17:38:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:24.637 17:38:33 -- nvmf/common.sh@119 -- # set +e 00:28:24.637 17:38:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:24.637 17:38:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:24.637 rmmod nvme_tcp 00:28:24.637 rmmod nvme_fabrics 00:28:24.637 rmmod nvme_keyring 00:28:24.897 17:38:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:24.897 17:38:33 -- nvmf/common.sh@123 -- # set -e 00:28:24.897 17:38:33 -- nvmf/common.sh@124 -- # return 0 00:28:24.897 17:38:33 -- nvmf/common.sh@477 -- # '[' -n 3330278 ']' 00:28:24.897 17:38:33 -- nvmf/common.sh@478 -- # killprocess 3330278 00:28:24.897 17:38:33 -- common/autotest_common.sh@926 -- # '[' -z 3330278 ']' 00:28:24.897 17:38:33 -- common/autotest_common.sh@930 -- # kill -0 3330278 00:28:24.897 17:38:33 -- common/autotest_common.sh@931 -- # uname 00:28:24.897 17:38:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:24.897 17:38:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3330278 00:28:24.897 17:38:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:24.897 17:38:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:24.897 17:38:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3330278' 00:28:24.897 killing process with pid 3330278 00:28:24.897 17:38:33 -- 
common/autotest_common.sh@945 -- # kill 3330278 00:28:24.897 17:38:33 -- common/autotest_common.sh@950 -- # wait 3330278 00:28:24.897 17:38:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:24.897 17:38:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:24.897 17:38:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:24.897 17:38:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:24.897 17:38:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:24.897 17:38:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.897 17:38:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.897 17:38:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.441 17:38:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:27.441 00:28:27.441 real 0m13.773s 00:28:27.441 user 0m17.138s 00:28:27.441 sys 0m6.293s 00:28:27.441 17:38:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.441 17:38:35 -- common/autotest_common.sh@10 -- # set +x 00:28:27.441 ************************************ 00:28:27.441 END TEST nvmf_multicontroller 00:28:27.441 ************************************ 00:28:27.441 17:38:35 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:27.441 17:38:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:27.441 17:38:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.441 17:38:35 -- common/autotest_common.sh@10 -- # set +x 00:28:27.441 ************************************ 00:28:27.441 START TEST nvmf_aer 00:28:27.441 ************************************ 00:28:27.441 17:38:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:27.441 * Looking for test storage... 
00:28:27.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:27.441 17:38:35 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.441 17:38:35 -- nvmf/common.sh@7 -- # uname -s 00:28:27.441 17:38:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.441 17:38:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.441 17:38:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.441 17:38:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.441 17:38:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.442 17:38:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.442 17:38:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.442 17:38:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.442 17:38:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.442 17:38:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.442 17:38:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:27.442 17:38:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:27.442 17:38:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.442 17:38:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.442 17:38:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.442 17:38:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.442 17:38:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.442 17:38:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.442 17:38:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.442 17:38:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.442 17:38:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.442 17:38:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.442 17:38:35 -- paths/export.sh@5 -- # export PATH 00:28:27.442 17:38:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.442 17:38:35 -- nvmf/common.sh@46 -- # : 0 00:28:27.442 17:38:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:27.442 17:38:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:27.442 17:38:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:27.442 17:38:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.442 17:38:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.442 17:38:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:27.442 17:38:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:27.442 17:38:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:27.442 17:38:35 -- host/aer.sh@11 -- # nvmftestinit 00:28:27.442 17:38:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:27.442 17:38:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.442 17:38:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:27.442 17:38:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:27.442 17:38:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:27.442 17:38:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.442 17:38:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.442 17:38:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.442 17:38:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:27.442 17:38:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:27.442 17:38:35 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:28:27.442 17:38:35 -- common/autotest_common.sh@10 -- # set +x 00:28:35.584 17:38:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:35.585 17:38:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:35.585 17:38:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:35.585 17:38:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:35.585 17:38:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:35.585 17:38:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:35.585 17:38:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:35.585 17:38:42 -- nvmf/common.sh@294 -- # net_devs=() 00:28:35.585 17:38:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:35.585 17:38:42 -- nvmf/common.sh@295 -- # e810=() 00:28:35.585 17:38:42 -- nvmf/common.sh@295 -- # local -ga e810 00:28:35.585 17:38:42 -- nvmf/common.sh@296 -- # x722=() 00:28:35.585 17:38:42 -- nvmf/common.sh@296 -- # local -ga x722 00:28:35.585 17:38:42 -- nvmf/common.sh@297 -- # mlx=() 00:28:35.585 17:38:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:35.585 17:38:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.585 17:38:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:35.585 17:38:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:35.585 17:38:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:35.585 17:38:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:35.585 17:38:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:35.585 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:35.585 17:38:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:35.585 17:38:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:35.585 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:35.585 17:38:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:35.585 17:38:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:28:35.585 17:38:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.585 17:38:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:35.585 17:38:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.585 17:38:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:35.585 Found net devices under 0000:31:00.0: cvl_0_0 00:28:35.585 17:38:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.585 17:38:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:35.585 17:38:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.585 17:38:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:35.585 17:38:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.585 17:38:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:35.585 Found net devices under 0000:31:00.1: cvl_0_1 00:28:35.585 17:38:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.585 17:38:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:35.585 17:38:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:35.585 17:38:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:35.585 17:38:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.585 17:38:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.585 17:38:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.585 17:38:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:35.585 17:38:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.585 17:38:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.585 17:38:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:35.585 17:38:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:28:35.585 17:38:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.585 17:38:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:35.585 17:38:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:35.585 17:38:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.585 17:38:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.585 17:38:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.585 17:38:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.585 17:38:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:35.585 17:38:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.585 17:38:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.585 17:38:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.585 17:38:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:35.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:28:35.585 00:28:35.585 --- 10.0.0.2 ping statistics --- 00:28:35.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.585 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:28:35.585 17:38:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:35.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:28:35.585 00:28:35.585 --- 10.0.0.1 ping statistics --- 00:28:35.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.585 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:35.585 17:38:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.585 17:38:42 -- nvmf/common.sh@410 -- # return 0 00:28:35.585 17:38:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:35.585 17:38:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.585 17:38:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:35.585 17:38:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.585 17:38:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:35.585 17:38:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:35.585 17:38:42 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:35.585 17:38:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:35.585 17:38:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:35.585 17:38:42 -- common/autotest_common.sh@10 -- # set +x 00:28:35.585 17:38:42 -- nvmf/common.sh@469 -- # nvmfpid=3335271 00:28:35.585 17:38:42 -- nvmf/common.sh@470 -- # waitforlisten 3335271 00:28:35.585 17:38:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:35.585 17:38:42 -- common/autotest_common.sh@819 -- # '[' -z 3335271 ']' 00:28:35.585 17:38:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.585 17:38:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:35.585 17:38:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:35.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.585 17:38:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:35.585 17:38:42 -- common/autotest_common.sh@10 -- # set +x 00:28:35.585 [2024-10-13 17:38:43.007181] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:35.585 [2024-10-13 17:38:43.007245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.585 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.585 [2024-10-13 17:38:43.081240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.585 [2024-10-13 17:38:43.119484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:35.585 [2024-10-13 17:38:43.119622] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.585 [2024-10-13 17:38:43.119631] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.585 [2024-10-13 17:38:43.119639] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:35.585 [2024-10-13 17:38:43.119792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.585 [2024-10-13 17:38:43.119916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.585 [2024-10-13 17:38:43.120135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.585 [2024-10-13 17:38:43.120355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.585 17:38:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:35.585 17:38:43 -- common/autotest_common.sh@852 -- # return 0 00:28:35.585 17:38:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:35.585 17:38:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:35.585 17:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:35.585 17:38:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.585 17:38:43 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.585 17:38:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.585 17:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:35.585 [2024-10-13 17:38:43.844453] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.585 17:38:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.585 17:38:43 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:35.585 17:38:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.585 17:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:35.585 Malloc0 00:28:35.585 17:38:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.585 17:38:43 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:35.585 17:38:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.586 17:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:35.586 17:38:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:28:35.586 17:38:43 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:35.586 17:38:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.586 17:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:35.586 17:38:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.586 17:38:43 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.586 17:38:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.586 17:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:35.586 [2024-10-13 17:38:43.903830] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.586 17:38:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.586 17:38:43 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:35.586 17:38:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.586 17:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:35.586 [2024-10-13 17:38:43.915612] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:35.586 [ 00:28:35.586 { 00:28:35.586 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:35.586 "subtype": "Discovery", 00:28:35.586 "listen_addresses": [], 00:28:35.586 "allow_any_host": true, 00:28:35.586 "hosts": [] 00:28:35.586 }, 00:28:35.586 { 00:28:35.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.586 "subtype": "NVMe", 00:28:35.586 "listen_addresses": [ 00:28:35.586 { 00:28:35.586 "transport": "TCP", 00:28:35.586 "trtype": "TCP", 00:28:35.586 "adrfam": "IPv4", 00:28:35.586 "traddr": "10.0.0.2", 00:28:35.586 "trsvcid": "4420" 00:28:35.586 } 00:28:35.586 ], 00:28:35.586 "allow_any_host": true, 00:28:35.586 "hosts": [], 00:28:35.586 "serial_number": "SPDK00000000000001", 00:28:35.586 "model_number": "SPDK bdev Controller", 
00:28:35.586 "max_namespaces": 2, 00:28:35.586 "min_cntlid": 1, 00:28:35.586 "max_cntlid": 65519, 00:28:35.586 "namespaces": [ 00:28:35.586 { 00:28:35.586 "nsid": 1, 00:28:35.586 "bdev_name": "Malloc0", 00:28:35.586 "name": "Malloc0", 00:28:35.586 "nguid": "3BD093F0125143A0BE545DBDDB54B980", 00:28:35.586 "uuid": "3bd093f0-1251-43a0-be54-5dbddb54b980" 00:28:35.586 } 00:28:35.586 ] 00:28:35.586 } 00:28:35.586 ] 00:28:35.586 17:38:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.586 17:38:43 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:35.586 17:38:43 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:35.586 17:38:43 -- host/aer.sh@33 -- # aerpid=3335431 00:28:35.586 17:38:43 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:35.586 17:38:43 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:35.586 17:38:43 -- common/autotest_common.sh@1244 -- # local i=0 00:28:35.586 17:38:43 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:35.586 17:38:43 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:28:35.586 17:38:43 -- common/autotest_common.sh@1247 -- # i=1 00:28:35.586 17:38:43 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:35.586 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.586 17:38:44 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:35.586 17:38:44 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:28:35.586 17:38:44 -- common/autotest_common.sh@1247 -- # i=2 00:28:35.586 17:38:44 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:35.847 17:38:44 -- common/autotest_common.sh@1245 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:35.847 17:38:44 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:28:35.847 17:38:44 -- common/autotest_common.sh@1247 -- # i=3 00:28:35.847 17:38:44 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:35.847 17:38:44 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:35.847 17:38:44 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:35.847 17:38:44 -- common/autotest_common.sh@1255 -- # return 0 00:28:35.847 17:38:44 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:35.847 17:38:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.847 17:38:44 -- common/autotest_common.sh@10 -- # set +x 00:28:35.847 Malloc1 00:28:35.847 17:38:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.847 17:38:44 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:35.847 17:38:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.847 17:38:44 -- common/autotest_common.sh@10 -- # set +x 00:28:35.847 17:38:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.847 17:38:44 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:35.847 17:38:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.847 17:38:44 -- common/autotest_common.sh@10 -- # set +x 00:28:35.847 Asynchronous Event Request test 00:28:35.847 Attaching to 10.0.0.2 00:28:35.847 Attached to 10.0.0.2 00:28:35.847 Registering asynchronous event callbacks... 00:28:35.847 Starting namespace attribute notice tests for all controllers... 00:28:35.847 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:35.847 aer_cb - Changed Namespace 00:28:35.847 Cleaning up... 
00:28:35.847 [ 00:28:35.847 { 00:28:35.847 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:35.847 "subtype": "Discovery", 00:28:35.847 "listen_addresses": [], 00:28:35.847 "allow_any_host": true, 00:28:35.847 "hosts": [] 00:28:35.847 }, 00:28:35.847 { 00:28:35.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.847 "subtype": "NVMe", 00:28:35.847 "listen_addresses": [ 00:28:35.847 { 00:28:35.847 "transport": "TCP", 00:28:35.847 "trtype": "TCP", 00:28:35.847 "adrfam": "IPv4", 00:28:35.847 "traddr": "10.0.0.2", 00:28:35.847 "trsvcid": "4420" 00:28:35.847 } 00:28:35.847 ], 00:28:35.847 "allow_any_host": true, 00:28:35.847 "hosts": [], 00:28:35.847 "serial_number": "SPDK00000000000001", 00:28:35.847 "model_number": "SPDK bdev Controller", 00:28:35.848 "max_namespaces": 2, 00:28:35.848 "min_cntlid": 1, 00:28:35.848 "max_cntlid": 65519, 00:28:35.848 "namespaces": [ 00:28:35.848 { 00:28:35.848 "nsid": 1, 00:28:35.848 "bdev_name": "Malloc0", 00:28:35.848 "name": "Malloc0", 00:28:35.848 "nguid": "3BD093F0125143A0BE545DBDDB54B980", 00:28:35.848 "uuid": "3bd093f0-1251-43a0-be54-5dbddb54b980" 00:28:35.848 }, 00:28:35.848 { 00:28:35.848 "nsid": 2, 00:28:35.848 "bdev_name": "Malloc1", 00:28:35.848 "name": "Malloc1", 00:28:35.848 "nguid": "29311F9A393447D89AC9D56290E00599", 00:28:35.848 "uuid": "29311f9a-3934-47d8-9ac9-d56290e00599" 00:28:35.848 } 00:28:35.848 ] 00:28:35.848 } 00:28:35.848 ] 00:28:35.848 17:38:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.848 17:38:44 -- host/aer.sh@43 -- # wait 3335431 00:28:35.848 17:38:44 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:35.848 17:38:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.848 17:38:44 -- common/autotest_common.sh@10 -- # set +x 00:28:35.848 17:38:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.848 17:38:44 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:35.848 17:38:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.848 
17:38:44 -- common/autotest_common.sh@10 -- # set +x 00:28:35.848 17:38:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.848 17:38:44 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.848 17:38:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.848 17:38:44 -- common/autotest_common.sh@10 -- # set +x 00:28:35.848 17:38:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.848 17:38:44 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:35.848 17:38:44 -- host/aer.sh@51 -- # nvmftestfini 00:28:35.848 17:38:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:35.848 17:38:44 -- nvmf/common.sh@116 -- # sync 00:28:35.848 17:38:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:35.848 17:38:44 -- nvmf/common.sh@119 -- # set +e 00:28:35.848 17:38:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:35.848 17:38:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:36.109 rmmod nvme_tcp 00:28:36.109 rmmod nvme_fabrics 00:28:36.109 rmmod nvme_keyring 00:28:36.109 17:38:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:36.109 17:38:44 -- nvmf/common.sh@123 -- # set -e 00:28:36.109 17:38:44 -- nvmf/common.sh@124 -- # return 0 00:28:36.109 17:38:44 -- nvmf/common.sh@477 -- # '[' -n 3335271 ']' 00:28:36.109 17:38:44 -- nvmf/common.sh@478 -- # killprocess 3335271 00:28:36.109 17:38:44 -- common/autotest_common.sh@926 -- # '[' -z 3335271 ']' 00:28:36.109 17:38:44 -- common/autotest_common.sh@930 -- # kill -0 3335271 00:28:36.109 17:38:44 -- common/autotest_common.sh@931 -- # uname 00:28:36.109 17:38:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:36.109 17:38:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3335271 00:28:36.109 17:38:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:36.109 17:38:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:36.109 17:38:44 -- common/autotest_common.sh@944 -- # echo 
'killing process with pid 3335271' 00:28:36.109 killing process with pid 3335271 00:28:36.109 17:38:44 -- common/autotest_common.sh@945 -- # kill 3335271 00:28:36.109 [2024-10-13 17:38:44.504963] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:36.109 17:38:44 -- common/autotest_common.sh@950 -- # wait 3335271 00:28:36.109 17:38:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:36.109 17:38:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:36.109 17:38:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:36.109 17:38:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:36.109 17:38:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:36.109 17:38:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.109 17:38:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.109 17:38:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.656 17:38:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:38.656 00:28:38.656 real 0m11.198s 00:28:38.656 user 0m8.092s 00:28:38.656 sys 0m5.907s 00:28:38.656 17:38:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.656 17:38:46 -- common/autotest_common.sh@10 -- # set +x 00:28:38.656 ************************************ 00:28:38.656 END TEST nvmf_aer 00:28:38.656 ************************************ 00:28:38.656 17:38:46 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:38.656 17:38:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:38.656 17:38:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:38.656 17:38:46 -- common/autotest_common.sh@10 -- # set +x 00:28:38.656 ************************************ 00:28:38.656 START TEST nvmf_async_init 00:28:38.656 
************************************ 00:28:38.656 17:38:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:38.656 * Looking for test storage... 00:28:38.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:38.656 17:38:46 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.656 17:38:46 -- nvmf/common.sh@7 -- # uname -s 00:28:38.656 17:38:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.656 17:38:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.656 17:38:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.656 17:38:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.656 17:38:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.656 17:38:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.656 17:38:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.656 17:38:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.656 17:38:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.656 17:38:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.656 17:38:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:38.656 17:38:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:38.656 17:38:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.656 17:38:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.656 17:38:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.656 17:38:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.656 17:38:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.656 17:38:46 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.656 17:38:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.656 17:38:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.656 17:38:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.656 17:38:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.656 17:38:46 -- paths/export.sh@5 -- # export PATH 00:28:38.656 17:38:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.656 17:38:46 -- nvmf/common.sh@46 -- # : 0 00:28:38.656 17:38:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:38.656 17:38:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:38.656 17:38:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:38.656 17:38:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.656 17:38:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.656 17:38:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:38.656 17:38:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:38.656 17:38:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:38.656 17:38:46 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:38.656 17:38:46 -- host/async_init.sh@14 -- # null_block_size=512 00:28:38.656 17:38:46 -- host/async_init.sh@15 -- # null_bdev=null0 00:28:38.656 17:38:46 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:38.656 17:38:46 -- host/async_init.sh@20 -- # uuidgen 00:28:38.656 17:38:46 -- host/async_init.sh@20 -- # tr -d - 00:28:38.656 17:38:46 -- host/async_init.sh@20 -- # nguid=03342facd88d4889a3afde9843af953e 00:28:38.656 17:38:46 -- host/async_init.sh@22 -- # nvmftestinit 00:28:38.656 17:38:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:38.656 17:38:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.656 17:38:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:38.656 17:38:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 
00:28:38.656 17:38:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:38.656 17:38:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.656 17:38:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.656 17:38:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.656 17:38:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:38.656 17:38:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:38.656 17:38:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:38.656 17:38:46 -- common/autotest_common.sh@10 -- # set +x 00:28:46.807 17:38:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:46.807 17:38:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:46.807 17:38:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:46.807 17:38:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:46.807 17:38:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:46.807 17:38:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:46.807 17:38:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:46.807 17:38:53 -- nvmf/common.sh@294 -- # net_devs=() 00:28:46.807 17:38:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:46.807 17:38:53 -- nvmf/common.sh@295 -- # e810=() 00:28:46.807 17:38:53 -- nvmf/common.sh@295 -- # local -ga e810 00:28:46.807 17:38:53 -- nvmf/common.sh@296 -- # x722=() 00:28:46.807 17:38:53 -- nvmf/common.sh@296 -- # local -ga x722 00:28:46.807 17:38:53 -- nvmf/common.sh@297 -- # mlx=() 00:28:46.807 17:38:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:46.807 17:38:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.807 17:38:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:46.807 17:38:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:46.807 17:38:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:46.807 17:38:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:46.807 17:38:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:46.807 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:46.807 17:38:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:46.807 17:38:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:46.807 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:46.807 17:38:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:46.807 17:38:53 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:46.807 17:38:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:46.807 17:38:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.807 17:38:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:46.807 17:38:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.807 17:38:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:46.807 Found net devices under 0000:31:00.0: cvl_0_0 00:28:46.807 17:38:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.807 17:38:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:46.807 17:38:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.807 17:38:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:46.807 17:38:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.807 17:38:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:46.807 Found net devices under 0000:31:00.1: cvl_0_1 00:28:46.807 17:38:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.807 17:38:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:46.807 17:38:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:46.807 17:38:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:46.807 17:38:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:46.807 17:38:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.807 17:38:53 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.807 17:38:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.807 17:38:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:46.807 17:38:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.807 17:38:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.807 17:38:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:46.807 17:38:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.807 17:38:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.807 17:38:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:46.807 17:38:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:46.807 17:38:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.807 17:38:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.807 17:38:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.807 17:38:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.807 17:38:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:46.807 17:38:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.807 17:38:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.807 17:38:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.807 17:38:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:46.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:46.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.743 ms 00:28:46.807 00:28:46.807 --- 10.0.0.2 ping statistics --- 00:28:46.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.807 rtt min/avg/max/mdev = 0.743/0.743/0.743/0.000 ms 00:28:46.807 17:38:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:28:46.807 00:28:46.807 --- 10.0.0.1 ping statistics --- 00:28:46.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.807 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:28:46.807 17:38:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.807 17:38:54 -- nvmf/common.sh@410 -- # return 0 00:28:46.807 17:38:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:46.807 17:38:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.807 17:38:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:46.807 17:38:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:46.807 17:38:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.807 17:38:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:46.807 17:38:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:46.807 17:38:54 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:46.807 17:38:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:46.807 17:38:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:46.807 17:38:54 -- common/autotest_common.sh@10 -- # set +x 00:28:46.807 17:38:54 -- nvmf/common.sh@469 -- # nvmfpid=3339830 00:28:46.807 17:38:54 -- nvmf/common.sh@470 -- # waitforlisten 3339830 00:28:46.807 17:38:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:46.807 17:38:54 -- common/autotest_common.sh@819 
-- # '[' -z 3339830 ']' 00:28:46.807 17:38:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.807 17:38:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:46.807 17:38:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.807 17:38:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:46.807 17:38:54 -- common/autotest_common.sh@10 -- # set +x 00:28:46.807 [2024-10-13 17:38:54.387133] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:46.807 [2024-10-13 17:38:54.387196] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.807 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.807 [2024-10-13 17:38:54.460172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.807 [2024-10-13 17:38:54.496654] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:46.807 [2024-10-13 17:38:54.496787] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.807 [2024-10-13 17:38:54.496796] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.807 [2024-10-13 17:38:54.496804] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:46.807 [2024-10-13 17:38:54.496825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.807 17:38:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:46.807 17:38:55 -- common/autotest_common.sh@852 -- # return 0 00:28:46.807 17:38:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:46.807 17:38:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:46.807 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:46.807 17:38:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.807 17:38:55 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:46.807 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.807 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:46.807 [2024-10-13 17:38:55.207324] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.807 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.807 17:38:55 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:46.807 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.808 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:46.808 null0 00:28:46.808 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.808 17:38:55 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:46.808 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.808 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:46.808 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.808 17:38:55 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:46.808 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.808 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:46.808 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.808 17:38:55 -- 
host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 03342facd88d4889a3afde9843af953e 00:28:46.808 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.808 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:46.808 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.808 17:38:55 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:46.808 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.808 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:46.808 [2024-10-13 17:38:55.263610] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.808 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.808 17:38:55 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:46.808 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.808 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.068 nvme0n1 00:28:47.068 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.068 17:38:55 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:47.068 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.068 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.068 [ 00:28:47.068 { 00:28:47.068 "name": "nvme0n1", 00:28:47.068 "aliases": [ 00:28:47.068 "03342fac-d88d-4889-a3af-de9843af953e" 00:28:47.068 ], 00:28:47.068 "product_name": "NVMe disk", 00:28:47.068 "block_size": 512, 00:28:47.068 "num_blocks": 2097152, 00:28:47.068 "uuid": "03342fac-d88d-4889-a3af-de9843af953e", 00:28:47.068 "assigned_rate_limits": { 00:28:47.068 "rw_ios_per_sec": 0, 00:28:47.068 "rw_mbytes_per_sec": 0, 00:28:47.068 "r_mbytes_per_sec": 0, 00:28:47.068 "w_mbytes_per_sec": 0 00:28:47.068 }, 00:28:47.068 
"claimed": false, 00:28:47.068 "zoned": false, 00:28:47.068 "supported_io_types": { 00:28:47.068 "read": true, 00:28:47.068 "write": true, 00:28:47.068 "unmap": false, 00:28:47.068 "write_zeroes": true, 00:28:47.068 "flush": true, 00:28:47.068 "reset": true, 00:28:47.068 "compare": true, 00:28:47.068 "compare_and_write": true, 00:28:47.068 "abort": true, 00:28:47.068 "nvme_admin": true, 00:28:47.068 "nvme_io": true 00:28:47.068 }, 00:28:47.068 "driver_specific": { 00:28:47.068 "nvme": [ 00:28:47.068 { 00:28:47.068 "trid": { 00:28:47.068 "trtype": "TCP", 00:28:47.068 "adrfam": "IPv4", 00:28:47.068 "traddr": "10.0.0.2", 00:28:47.068 "trsvcid": "4420", 00:28:47.068 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:47.068 }, 00:28:47.068 "ctrlr_data": { 00:28:47.068 "cntlid": 1, 00:28:47.068 "vendor_id": "0x8086", 00:28:47.068 "model_number": "SPDK bdev Controller", 00:28:47.068 "serial_number": "00000000000000000000", 00:28:47.068 "firmware_revision": "24.01.1", 00:28:47.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:47.068 "oacs": { 00:28:47.068 "security": 0, 00:28:47.068 "format": 0, 00:28:47.068 "firmware": 0, 00:28:47.068 "ns_manage": 0 00:28:47.068 }, 00:28:47.068 "multi_ctrlr": true, 00:28:47.068 "ana_reporting": false 00:28:47.068 }, 00:28:47.068 "vs": { 00:28:47.068 "nvme_version": "1.3" 00:28:47.068 }, 00:28:47.068 "ns_data": { 00:28:47.068 "id": 1, 00:28:47.068 "can_share": true 00:28:47.068 } 00:28:47.068 } 00:28:47.068 ], 00:28:47.068 "mp_policy": "active_passive" 00:28:47.068 } 00:28:47.068 } 00:28:47.068 ] 00:28:47.068 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.068 17:38:55 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:47.068 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.068 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.068 [2024-10-13 17:38:55.528139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 
00:28:47.068 [2024-10-13 17:38:55.528200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193ec30 (9): Bad file descriptor 00:28:47.329 [2024-10-13 17:38:55.670160] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:47.329 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.329 17:38:55 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:47.329 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.329 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.329 [ 00:28:47.329 { 00:28:47.329 "name": "nvme0n1", 00:28:47.329 "aliases": [ 00:28:47.329 "03342fac-d88d-4889-a3af-de9843af953e" 00:28:47.329 ], 00:28:47.329 "product_name": "NVMe disk", 00:28:47.329 "block_size": 512, 00:28:47.329 "num_blocks": 2097152, 00:28:47.329 "uuid": "03342fac-d88d-4889-a3af-de9843af953e", 00:28:47.329 "assigned_rate_limits": { 00:28:47.329 "rw_ios_per_sec": 0, 00:28:47.329 "rw_mbytes_per_sec": 0, 00:28:47.329 "r_mbytes_per_sec": 0, 00:28:47.329 "w_mbytes_per_sec": 0 00:28:47.329 }, 00:28:47.329 "claimed": false, 00:28:47.329 "zoned": false, 00:28:47.329 "supported_io_types": { 00:28:47.329 "read": true, 00:28:47.329 "write": true, 00:28:47.329 "unmap": false, 00:28:47.329 "write_zeroes": true, 00:28:47.329 "flush": true, 00:28:47.329 "reset": true, 00:28:47.329 "compare": true, 00:28:47.329 "compare_and_write": true, 00:28:47.329 "abort": true, 00:28:47.329 "nvme_admin": true, 00:28:47.329 "nvme_io": true 00:28:47.329 }, 00:28:47.329 "driver_specific": { 00:28:47.329 "nvme": [ 00:28:47.329 { 00:28:47.329 "trid": { 00:28:47.329 "trtype": "TCP", 00:28:47.329 "adrfam": "IPv4", 00:28:47.329 "traddr": "10.0.0.2", 00:28:47.329 "trsvcid": "4420", 00:28:47.329 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:47.329 }, 00:28:47.329 "ctrlr_data": { 00:28:47.329 "cntlid": 2, 00:28:47.329 "vendor_id": "0x8086", 00:28:47.329 "model_number": "SPDK bdev 
Controller", 00:28:47.329 "serial_number": "00000000000000000000", 00:28:47.329 "firmware_revision": "24.01.1", 00:28:47.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:47.329 "oacs": { 00:28:47.329 "security": 0, 00:28:47.329 "format": 0, 00:28:47.329 "firmware": 0, 00:28:47.329 "ns_manage": 0 00:28:47.329 }, 00:28:47.329 "multi_ctrlr": true, 00:28:47.329 "ana_reporting": false 00:28:47.329 }, 00:28:47.329 "vs": { 00:28:47.329 "nvme_version": "1.3" 00:28:47.329 }, 00:28:47.329 "ns_data": { 00:28:47.329 "id": 1, 00:28:47.329 "can_share": true 00:28:47.329 } 00:28:47.329 } 00:28:47.329 ], 00:28:47.329 "mp_policy": "active_passive" 00:28:47.329 } 00:28:47.329 } 00:28:47.329 ] 00:28:47.329 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.329 17:38:55 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.329 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.329 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.329 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.329 17:38:55 -- host/async_init.sh@53 -- # mktemp 00:28:47.329 17:38:55 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.pz26hf9Ea5 00:28:47.329 17:38:55 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:47.329 17:38:55 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.pz26hf9Ea5 00:28:47.329 17:38:55 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:47.329 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.329 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.329 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.329 17:38:55 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:47.329 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:28:47.329 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.329 [2024-10-13 17:38:55.740789] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:47.330 [2024-10-13 17:38:55.740931] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:47.330 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.330 17:38:55 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pz26hf9Ea5 00:28:47.330 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.330 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.330 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.330 17:38:55 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pz26hf9Ea5 00:28:47.330 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.330 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.330 [2024-10-13 17:38:55.764850] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:47.330 nvme0n1 00:28:47.330 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.330 17:38:55 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:47.330 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.330 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.330 [ 00:28:47.330 { 00:28:47.330 "name": "nvme0n1", 00:28:47.330 "aliases": [ 00:28:47.330 "03342fac-d88d-4889-a3af-de9843af953e" 00:28:47.330 ], 00:28:47.330 "product_name": "NVMe disk", 00:28:47.330 "block_size": 512, 00:28:47.330 "num_blocks": 2097152, 00:28:47.330 "uuid": "03342fac-d88d-4889-a3af-de9843af953e", 00:28:47.330 "assigned_rate_limits": { 00:28:47.330 "rw_ios_per_sec": 0, 
00:28:47.330 "rw_mbytes_per_sec": 0, 00:28:47.330 "r_mbytes_per_sec": 0, 00:28:47.330 "w_mbytes_per_sec": 0 00:28:47.330 }, 00:28:47.330 "claimed": false, 00:28:47.330 "zoned": false, 00:28:47.330 "supported_io_types": { 00:28:47.330 "read": true, 00:28:47.330 "write": true, 00:28:47.330 "unmap": false, 00:28:47.330 "write_zeroes": true, 00:28:47.330 "flush": true, 00:28:47.330 "reset": true, 00:28:47.330 "compare": true, 00:28:47.330 "compare_and_write": true, 00:28:47.330 "abort": true, 00:28:47.330 "nvme_admin": true, 00:28:47.330 "nvme_io": true 00:28:47.330 }, 00:28:47.330 "driver_specific": { 00:28:47.330 "nvme": [ 00:28:47.330 { 00:28:47.330 "trid": { 00:28:47.330 "trtype": "TCP", 00:28:47.330 "adrfam": "IPv4", 00:28:47.330 "traddr": "10.0.0.2", 00:28:47.330 "trsvcid": "4421", 00:28:47.330 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:47.591 }, 00:28:47.591 "ctrlr_data": { 00:28:47.591 "cntlid": 3, 00:28:47.591 "vendor_id": "0x8086", 00:28:47.591 "model_number": "SPDK bdev Controller", 00:28:47.591 "serial_number": "00000000000000000000", 00:28:47.591 "firmware_revision": "24.01.1", 00:28:47.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:47.591 "oacs": { 00:28:47.591 "security": 0, 00:28:47.591 "format": 0, 00:28:47.591 "firmware": 0, 00:28:47.591 "ns_manage": 0 00:28:47.591 }, 00:28:47.591 "multi_ctrlr": true, 00:28:47.591 "ana_reporting": false 00:28:47.591 }, 00:28:47.591 "vs": { 00:28:47.591 "nvme_version": "1.3" 00:28:47.591 }, 00:28:47.591 "ns_data": { 00:28:47.591 "id": 1, 00:28:47.591 "can_share": true 00:28:47.591 } 00:28:47.591 } 00:28:47.591 ], 00:28:47.591 "mp_policy": "active_passive" 00:28:47.591 } 00:28:47.591 } 00:28:47.591 ] 00:28:47.591 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.591 17:38:55 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.591 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.591 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:47.591 
17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.591 17:38:55 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.pz26hf9Ea5 00:28:47.591 17:38:55 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:47.591 17:38:55 -- host/async_init.sh@78 -- # nvmftestfini 00:28:47.591 17:38:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:47.591 17:38:55 -- nvmf/common.sh@116 -- # sync 00:28:47.591 17:38:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:47.591 17:38:55 -- nvmf/common.sh@119 -- # set +e 00:28:47.591 17:38:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:47.591 17:38:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:47.591 rmmod nvme_tcp 00:28:47.591 rmmod nvme_fabrics 00:28:47.591 rmmod nvme_keyring 00:28:47.591 17:38:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:47.591 17:38:55 -- nvmf/common.sh@123 -- # set -e 00:28:47.591 17:38:55 -- nvmf/common.sh@124 -- # return 0 00:28:47.591 17:38:55 -- nvmf/common.sh@477 -- # '[' -n 3339830 ']' 00:28:47.591 17:38:55 -- nvmf/common.sh@478 -- # killprocess 3339830 00:28:47.591 17:38:55 -- common/autotest_common.sh@926 -- # '[' -z 3339830 ']' 00:28:47.591 17:38:55 -- common/autotest_common.sh@930 -- # kill -0 3339830 00:28:47.591 17:38:55 -- common/autotest_common.sh@931 -- # uname 00:28:47.591 17:38:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:47.591 17:38:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3339830 00:28:47.591 17:38:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:47.591 17:38:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:47.591 17:38:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3339830' 00:28:47.591 killing process with pid 3339830 00:28:47.591 17:38:56 -- common/autotest_common.sh@945 -- # kill 3339830 00:28:47.591 17:38:56 -- common/autotest_common.sh@950 -- # wait 3339830 00:28:47.852 17:38:56 -- nvmf/common.sh@480 -- # '[' '' == iso 
']' 00:28:47.852 17:38:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:47.852 17:38:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:47.852 17:38:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:47.852 17:38:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:47.852 17:38:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.852 17:38:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:47.852 17:38:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.766 17:38:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:49.766 00:28:49.766 real 0m11.457s 00:28:49.766 user 0m4.018s 00:28:49.766 sys 0m5.902s 00:28:49.766 17:38:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.766 17:38:58 -- common/autotest_common.sh@10 -- # set +x 00:28:49.766 ************************************ 00:28:49.766 END TEST nvmf_async_init 00:28:49.766 ************************************ 00:28:49.766 17:38:58 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:49.766 17:38:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:49.766 17:38:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:49.766 17:38:58 -- common/autotest_common.sh@10 -- # set +x 00:28:49.766 ************************************ 00:28:49.766 START TEST dma 00:28:49.766 ************************************ 00:28:49.766 17:38:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:50.027 * Looking for test storage... 
00:28:50.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:50.027 17:38:58 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.027 17:38:58 -- nvmf/common.sh@7 -- # uname -s 00:28:50.027 17:38:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.027 17:38:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.027 17:38:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.027 17:38:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.027 17:38:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.027 17:38:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.027 17:38:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.027 17:38:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.027 17:38:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.027 17:38:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.027 17:38:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:50.027 17:38:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:50.027 17:38:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.027 17:38:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.027 17:38:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.027 17:38:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.027 17:38:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.027 17:38:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.027 17:38:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.027 17:38:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.027 17:38:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.027 17:38:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.027 17:38:58 -- paths/export.sh@5 -- # export PATH 00:28:50.027 17:38:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.027 17:38:58 -- nvmf/common.sh@46 -- # : 0 00:28:50.028 17:38:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:50.028 17:38:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:50.028 17:38:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:50.028 17:38:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.028 17:38:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.028 17:38:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:50.028 17:38:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:50.028 17:38:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:50.028 17:38:58 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:50.028 17:38:58 -- host/dma.sh@13 -- # exit 0 00:28:50.028 00:28:50.028 real 0m0.128s 00:28:50.028 user 0m0.059s 00:28:50.028 sys 0m0.078s 00:28:50.028 17:38:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.028 17:38:58 -- common/autotest_common.sh@10 -- # set +x 00:28:50.028 ************************************ 00:28:50.028 END TEST dma 00:28:50.028 ************************************ 00:28:50.028 17:38:58 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:50.028 17:38:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:50.028 17:38:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:50.028 17:38:58 -- common/autotest_common.sh@10 
-- # set +x 00:28:50.028 ************************************ 00:28:50.028 START TEST nvmf_identify 00:28:50.028 ************************************ 00:28:50.028 17:38:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:50.028 * Looking for test storage... 00:28:50.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:50.028 17:38:58 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.028 17:38:58 -- nvmf/common.sh@7 -- # uname -s 00:28:50.028 17:38:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.028 17:38:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.028 17:38:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.028 17:38:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.028 17:38:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.028 17:38:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.028 17:38:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.028 17:38:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.028 17:38:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.028 17:38:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.028 17:38:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:50.028 17:38:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:50.028 17:38:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.028 17:38:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.028 17:38:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.028 17:38:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.028 17:38:58 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:28:50.028 17:38:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.028 17:38:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.028 17:38:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.028 17:38:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.028 17:38:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.028 17:38:58 -- paths/export.sh@5 -- # export PATH 00:28:50.028 
17:38:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.028 17:38:58 -- nvmf/common.sh@46 -- # : 0 00:28:50.028 17:38:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:50.028 17:38:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:50.028 17:38:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:50.028 17:38:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.028 17:38:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.028 17:38:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:50.028 17:38:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:50.028 17:38:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:50.028 17:38:58 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:50.028 17:38:58 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:50.028 17:38:58 -- host/identify.sh@14 -- # nvmftestinit 00:28:50.028 17:38:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:50.028 17:38:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.028 17:38:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:50.028 17:38:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:50.028 17:38:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:50.028 17:38:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.028 17:38:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.028 17:38:58 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:28:50.290 17:38:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:50.290 17:38:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:50.290 17:38:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:50.290 17:38:58 -- common/autotest_common.sh@10 -- # set +x 00:28:58.429 17:39:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:58.429 17:39:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:58.429 17:39:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:58.429 17:39:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:58.429 17:39:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:58.429 17:39:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:58.429 17:39:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:58.429 17:39:05 -- nvmf/common.sh@294 -- # net_devs=() 00:28:58.429 17:39:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:58.429 17:39:05 -- nvmf/common.sh@295 -- # e810=() 00:28:58.429 17:39:05 -- nvmf/common.sh@295 -- # local -ga e810 00:28:58.429 17:39:05 -- nvmf/common.sh@296 -- # x722=() 00:28:58.429 17:39:05 -- nvmf/common.sh@296 -- # local -ga x722 00:28:58.429 17:39:05 -- nvmf/common.sh@297 -- # mlx=() 00:28:58.429 17:39:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:58.429 17:39:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.429 17:39:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:58.430 17:39:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:58.430 17:39:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:58.430 17:39:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:58.430 17:39:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:58.430 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:58.430 17:39:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:58.430 17:39:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:58.430 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:58.430 17:39:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:58.430 17:39:05 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:58.430 17:39:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.430 17:39:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:58.430 17:39:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.430 17:39:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:58.430 Found net devices under 0000:31:00.0: cvl_0_0 00:28:58.430 17:39:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.430 17:39:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:58.430 17:39:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.430 17:39:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:58.430 17:39:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.430 17:39:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:58.430 Found net devices under 0000:31:00.1: cvl_0_1 00:28:58.430 17:39:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.430 17:39:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:58.430 17:39:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:58.430 17:39:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:58.430 17:39:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.430 17:39:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.430 17:39:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.430 17:39:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:58.430 17:39:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.430 17:39:05 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.430 17:39:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:58.430 17:39:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.430 17:39:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.430 17:39:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:58.430 17:39:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:58.430 17:39:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.430 17:39:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.430 17:39:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.430 17:39:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.430 17:39:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:58.430 17:39:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.430 17:39:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.430 17:39:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.430 17:39:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:58.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:28:58.430 00:28:58.430 --- 10.0.0.2 ping statistics --- 00:28:58.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.430 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:28:58.430 17:39:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:28:58.430 00:28:58.430 --- 10.0.0.1 ping statistics --- 00:28:58.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.430 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:28:58.430 17:39:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.430 17:39:05 -- nvmf/common.sh@410 -- # return 0 00:28:58.430 17:39:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:58.430 17:39:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.430 17:39:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:58.430 17:39:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.430 17:39:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:58.430 17:39:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:58.430 17:39:05 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:58.430 17:39:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:58.430 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:28:58.430 17:39:05 -- host/identify.sh@19 -- # nvmfpid=3344421 00:28:58.430 17:39:05 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:58.430 17:39:05 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:58.430 17:39:05 -- host/identify.sh@23 -- # waitforlisten 3344421 00:28:58.430 17:39:05 -- common/autotest_common.sh@819 -- # '[' -z 3344421 ']' 00:28:58.430 17:39:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.430 17:39:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:58.430 17:39:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:58.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.430 17:39:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:58.430 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:28:58.430 [2024-10-13 17:39:05.972010] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:58.430 [2024-10-13 17:39:05.972110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.430 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.430 [2024-10-13 17:39:06.054623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.430 [2024-10-13 17:39:06.093493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:58.430 [2024-10-13 17:39:06.093651] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.430 [2024-10-13 17:39:06.093662] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.430 [2024-10-13 17:39:06.093672] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:58.430 [2024-10-13 17:39:06.093839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.430 [2024-10-13 17:39:06.093951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.430 [2024-10-13 17:39:06.094113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.430 [2024-10-13 17:39:06.094113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.430 17:39:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:58.430 17:39:06 -- common/autotest_common.sh@852 -- # return 0 00:28:58.430 17:39:06 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:58.430 17:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.430 17:39:06 -- common/autotest_common.sh@10 -- # set +x 00:28:58.430 [2024-10-13 17:39:06.767240] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.430 17:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.430 17:39:06 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:58.430 17:39:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:58.430 17:39:06 -- common/autotest_common.sh@10 -- # set +x 00:28:58.430 17:39:06 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:58.430 17:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.430 17:39:06 -- common/autotest_common.sh@10 -- # set +x 00:28:58.430 Malloc0 00:28:58.430 17:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.430 17:39:06 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:58.430 17:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.430 17:39:06 -- common/autotest_common.sh@10 -- # set +x 00:28:58.430 17:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.430 17:39:06 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:58.430 17:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.430 17:39:06 -- common/autotest_common.sh@10 -- # set +x 00:28:58.430 17:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.430 17:39:06 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.430 17:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.430 17:39:06 -- common/autotest_common.sh@10 -- # set +x 00:28:58.430 [2024-10-13 17:39:06.862647] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.430 17:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.430 17:39:06 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:58.430 17:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.430 17:39:06 -- common/autotest_common.sh@10 -- # set +x 00:28:58.430 17:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.430 17:39:06 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:58.430 17:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.430 17:39:06 -- common/autotest_common.sh@10 -- # set +x 00:28:58.430 [2024-10-13 17:39:06.882466] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:58.430 [ 00:28:58.430 { 00:28:58.430 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:58.430 "subtype": "Discovery", 00:28:58.430 "listen_addresses": [ 00:28:58.430 { 00:28:58.430 "transport": "TCP", 00:28:58.430 "trtype": "TCP", 00:28:58.430 "adrfam": "IPv4", 00:28:58.430 "traddr": "10.0.0.2", 00:28:58.430 "trsvcid": "4420" 00:28:58.430 } 00:28:58.430 ], 00:28:58.430 "allow_any_host": true, 00:28:58.430 "hosts": [] 00:28:58.431 }, 00:28:58.431 
{ 00:28:58.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:58.431 "subtype": "NVMe", 00:28:58.431 "listen_addresses": [ 00:28:58.431 { 00:28:58.431 "transport": "TCP", 00:28:58.431 "trtype": "TCP", 00:28:58.431 "adrfam": "IPv4", 00:28:58.431 "traddr": "10.0.0.2", 00:28:58.431 "trsvcid": "4420" 00:28:58.431 } 00:28:58.431 ], 00:28:58.431 "allow_any_host": true, 00:28:58.431 "hosts": [], 00:28:58.431 "serial_number": "SPDK00000000000001", 00:28:58.431 "model_number": "SPDK bdev Controller", 00:28:58.431 "max_namespaces": 32, 00:28:58.431 "min_cntlid": 1, 00:28:58.431 "max_cntlid": 65519, 00:28:58.431 "namespaces": [ 00:28:58.431 { 00:28:58.431 "nsid": 1, 00:28:58.431 "bdev_name": "Malloc0", 00:28:58.431 "name": "Malloc0", 00:28:58.431 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:58.431 "eui64": "ABCDEF0123456789", 00:28:58.431 "uuid": "a83f9f93-703e-4195-8f00-3f09ca171fbb" 00:28:58.431 } 00:28:58.431 ] 00:28:58.431 } 00:28:58.431 ] 00:28:58.431 17:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.431 17:39:06 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:58.431 [2024-10-13 17:39:06.919535] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:58.431 [2024-10-13 17:39:06.919585] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344768 ] 00:28:58.431 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.431 [2024-10-13 17:39:06.952712] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:58.431 [2024-10-13 17:39:06.952752] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:58.431 [2024-10-13 17:39:06.952758] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:58.431 [2024-10-13 17:39:06.952769] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:58.431 [2024-10-13 17:39:06.952777] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:58.693 [2024-10-13 17:39:06.956100] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:58.693 [2024-10-13 17:39:06.956135] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a6c340 0 00:28:58.693 [2024-10-13 17:39:06.964072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:58.693 [2024-10-13 17:39:06.964082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:58.693 [2024-10-13 17:39:06.964087] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:58.693 [2024-10-13 17:39:06.964090] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:58.693 [2024-10-13 17:39:06.964124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.693 [2024-10-13 17:39:06.964130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:58.693 [2024-10-13 17:39:06.964134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6c340) 00:28:58.693 [2024-10-13 17:39:06.964147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:58.693 [2024-10-13 17:39:06.964162] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2970, cid 0, qid 0 00:28:58.693 [2024-10-13 17:39:06.971071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.693 [2024-10-13 17:39:06.971081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.693 [2024-10-13 17:39:06.971085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.693 [2024-10-13 17:39:06.971089] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2970) on tqpair=0x1a6c340 00:28:58.693 [2024-10-13 17:39:06.971102] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:58.693 [2024-10-13 17:39:06.971108] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:58.694 [2024-10-13 17:39:06.971114] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:58.694 [2024-10-13 17:39:06.971125] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971129] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971133] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6c340) 00:28:58.694 [2024-10-13 17:39:06.971141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.694 [2024-10-13 17:39:06.971154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1ad2970, cid 0, qid 0 00:28:58.694 [2024-10-13 17:39:06.971356] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.694 [2024-10-13 17:39:06.971363] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.694 [2024-10-13 17:39:06.971366] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971374] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2970) on tqpair=0x1a6c340 00:28:58.694 [2024-10-13 17:39:06.971380] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:58.694 [2024-10-13 17:39:06.971387] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:58.694 [2024-10-13 17:39:06.971394] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971398] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971402] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6c340) 00:28:58.694 [2024-10-13 17:39:06.971408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.694 [2024-10-13 17:39:06.971419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2970, cid 0, qid 0 00:28:58.694 [2024-10-13 17:39:06.971533] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.694 [2024-10-13 17:39:06.971539] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.694 [2024-10-13 17:39:06.971543] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971547] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2970) on tqpair=0x1a6c340 00:28:58.694 [2024-10-13 
17:39:06.971552] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:58.694 [2024-10-13 17:39:06.971561] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:58.694 [2024-10-13 17:39:06.971567] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971571] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971574] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6c340) 00:28:58.694 [2024-10-13 17:39:06.971581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.694 [2024-10-13 17:39:06.971591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2970, cid 0, qid 0 00:28:58.694 [2024-10-13 17:39:06.971792] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.694 [2024-10-13 17:39:06.971798] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.694 [2024-10-13 17:39:06.971802] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971806] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2970) on tqpair=0x1a6c340 00:28:58.694 [2024-10-13 17:39:06.971812] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:58.694 [2024-10-13 17:39:06.971821] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971825] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971828] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1a6c340) 00:28:58.694 [2024-10-13 17:39:06.971835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.694 [2024-10-13 17:39:06.971845] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2970, cid 0, qid 0 00:28:58.694 [2024-10-13 17:39:06.971979] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.694 [2024-10-13 17:39:06.971985] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.694 [2024-10-13 17:39:06.971989] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.971993] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2970) on tqpair=0x1a6c340 00:28:58.694 [2024-10-13 17:39:06.971998] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:58.694 [2024-10-13 17:39:06.972005] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:58.694 [2024-10-13 17:39:06.972013] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:58.694 [2024-10-13 17:39:06.972118] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:58.694 [2024-10-13 17:39:06.972123] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:58.694 [2024-10-13 17:39:06.972131] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.972135] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.972138] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6c340) 00:28:58.694 [2024-10-13 17:39:06.972145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.694 [2024-10-13 17:39:06.972156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2970, cid 0, qid 0 00:28:58.694 [2024-10-13 17:39:06.972251] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.694 [2024-10-13 17:39:06.972257] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.694 [2024-10-13 17:39:06.972261] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.972265] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2970) on tqpair=0x1a6c340 00:28:58.694 [2024-10-13 17:39:06.972270] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:58.694 [2024-10-13 17:39:06.972279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.972283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.972287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6c340) 00:28:58.694 [2024-10-13 17:39:06.972294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.694 [2024-10-13 17:39:06.972304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2970, cid 0, qid 0 00:28:58.694 [2024-10-13 17:39:06.972510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.694 [2024-10-13 17:39:06.972517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.694 [2024-10-13 17:39:06.972520] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.972524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2970) on tqpair=0x1a6c340 00:28:58.694 [2024-10-13 17:39:06.972529] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:58.694 [2024-10-13 17:39:06.972534] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:58.694 [2024-10-13 17:39:06.972541] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:58.694 [2024-10-13 17:39:06.972549] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:58.694 [2024-10-13 17:39:06.972556] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.972560] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.972564] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6c340) 00:28:58.694 [2024-10-13 17:39:06.972571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.694 [2024-10-13 17:39:06.972583] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2970, cid 0, qid 0 00:28:58.694 [2024-10-13 17:39:06.972752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.694 [2024-10-13 17:39:06.972759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.694 [2024-10-13 17:39:06.972762] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.694 
[2024-10-13 17:39:06.972767] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6c340): datao=0, datal=4096, cccid=0 00:28:58.694 [2024-10-13 17:39:06.972771] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ad2970) on tqpair(0x1a6c340): expected_datao=0, payload_size=4096 00:28:58.694 [2024-10-13 17:39:06.972786] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:06.972791] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:07.016070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.694 [2024-10-13 17:39:07.016080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.694 [2024-10-13 17:39:07.016084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:07.016088] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2970) on tqpair=0x1a6c340 00:28:58.694 [2024-10-13 17:39:07.016096] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:58.694 [2024-10-13 17:39:07.016101] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:58.694 [2024-10-13 17:39:07.016106] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:58.694 [2024-10-13 17:39:07.016111] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:58.694 [2024-10-13 17:39:07.016115] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:58.694 [2024-10-13 17:39:07.016120] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:58.694 [2024-10-13 
17:39:07.016132] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:58.694 [2024-10-13 17:39:07.016139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:07.016143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:07.016147] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6c340) 00:28:58.694 [2024-10-13 17:39:07.016154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:58.694 [2024-10-13 17:39:07.016167] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2970, cid 0, qid 0 00:28:58.694 [2024-10-13 17:39:07.016346] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.694 [2024-10-13 17:39:07.016353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.694 [2024-10-13 17:39:07.016356] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:07.016360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2970) on tqpair=0x1a6c340 00:28:58.694 [2024-10-13 17:39:07.016368] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.694 [2024-10-13 17:39:07.016371] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016375] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6c340) 00:28:58.695 [2024-10-13 17:39:07.016381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.695 [2024-10-13 17:39:07.016387] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016393] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016397] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a6c340) 00:28:58.695 [2024-10-13 17:39:07.016403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.695 [2024-10-13 17:39:07.016409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a6c340) 00:28:58.695 [2024-10-13 17:39:07.016421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.695 [2024-10-13 17:39:07.016427] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016431] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016435] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.695 [2024-10-13 17:39:07.016440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.695 [2024-10-13 17:39:07.016445] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:58.695 [2024-10-13 17:39:07.016455] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:58.695 [2024-10-13 17:39:07.016462] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016465] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016469] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6c340) 00:28:58.695 [2024-10-13 17:39:07.016476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.695 [2024-10-13 17:39:07.016487] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2970, cid 0, qid 0 00:28:58.695 [2024-10-13 17:39:07.016492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2ad0, cid 1, qid 0 00:28:58.695 [2024-10-13 17:39:07.016497] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2c30, cid 2, qid 0 00:28:58.695 [2024-10-13 17:39:07.016502] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.695 [2024-10-13 17:39:07.016506] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2ef0, cid 4, qid 0 00:28:58.695 [2024-10-13 17:39:07.016639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.695 [2024-10-13 17:39:07.016645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.695 [2024-10-13 17:39:07.016648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016652] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2ef0) on tqpair=0x1a6c340 00:28:58.695 [2024-10-13 17:39:07.016658] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:58.695 [2024-10-13 17:39:07.016663] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:58.695 [2024-10-13 17:39:07.016672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.695 [2024-10-13 
17:39:07.016676] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6c340) 00:28:58.695 [2024-10-13 17:39:07.016686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.695 [2024-10-13 17:39:07.016696] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2ef0, cid 4, qid 0 00:28:58.695 [2024-10-13 17:39:07.016823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.695 [2024-10-13 17:39:07.016830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.695 [2024-10-13 17:39:07.016833] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016837] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6c340): datao=0, datal=4096, cccid=4 00:28:58.695 [2024-10-13 17:39:07.016841] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ad2ef0) on tqpair(0x1a6c340): expected_datao=0, payload_size=4096 00:28:58.695 [2024-10-13 17:39:07.016858] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.016863] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.017005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.695 [2024-10-13 17:39:07.017011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.695 [2024-10-13 17:39:07.017015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.017019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2ef0) on tqpair=0x1a6c340 00:28:58.695 [2024-10-13 17:39:07.017030] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:58.695 [2024-10-13 17:39:07.017050] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.017054] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.017058] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6c340) 00:28:58.695 [2024-10-13 17:39:07.021069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.695 [2024-10-13 17:39:07.021077] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.021081] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.021085] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a6c340) 00:28:58.695 [2024-10-13 17:39:07.021091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.695 [2024-10-13 17:39:07.021106] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2ef0, cid 4, qid 0 00:28:58.695 [2024-10-13 17:39:07.021111] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad3050, cid 5, qid 0 00:28:58.695 [2024-10-13 17:39:07.021343] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.695 [2024-10-13 17:39:07.021350] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.695 [2024-10-13 17:39:07.021353] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.021357] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6c340): datao=0, datal=1024, cccid=4 00:28:58.695 [2024-10-13 17:39:07.021362] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1ad2ef0) on tqpair(0x1a6c340): expected_datao=0, payload_size=1024 00:28:58.695 [2024-10-13 17:39:07.021369] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.021373] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.021378] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.695 [2024-10-13 17:39:07.021384] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.695 [2024-10-13 17:39:07.021387] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.021391] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad3050) on tqpair=0x1a6c340 00:28:58.695 [2024-10-13 17:39:07.062232] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.695 [2024-10-13 17:39:07.062242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.695 [2024-10-13 17:39:07.062246] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.062249] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2ef0) on tqpair=0x1a6c340 00:28:58.695 [2024-10-13 17:39:07.062264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.062267] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.062271] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6c340) 00:28:58.695 [2024-10-13 17:39:07.062278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.695 [2024-10-13 17:39:07.062292] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2ef0, cid 4, qid 0 00:28:58.695 [2024-10-13 17:39:07.062539] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 7 00:28:58.695 [2024-10-13 17:39:07.062546] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.695 [2024-10-13 17:39:07.062549] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.062553] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6c340): datao=0, datal=3072, cccid=4 00:28:58.695 [2024-10-13 17:39:07.062558] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ad2ef0) on tqpair(0x1a6c340): expected_datao=0, payload_size=3072 00:28:58.695 [2024-10-13 17:39:07.062574] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.062578] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.107073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.695 [2024-10-13 17:39:07.107084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.695 [2024-10-13 17:39:07.107087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.107091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2ef0) on tqpair=0x1a6c340 00:28:58.695 [2024-10-13 17:39:07.107100] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.107104] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.107108] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6c340) 00:28:58.695 [2024-10-13 17:39:07.107115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.695 [2024-10-13 17:39:07.107129] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2ef0, cid 4, qid 0 00:28:58.695 [2024-10-13 17:39:07.107329] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.695 [2024-10-13 17:39:07.107335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.695 [2024-10-13 17:39:07.107339] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.107342] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6c340): datao=0, datal=8, cccid=4 00:28:58.695 [2024-10-13 17:39:07.107347] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ad2ef0) on tqpair(0x1a6c340): expected_datao=0, payload_size=8 00:28:58.695 [2024-10-13 17:39:07.107354] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.107358] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.148247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.695 [2024-10-13 17:39:07.148255] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.695 [2024-10-13 17:39:07.148259] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.695 [2024-10-13 17:39:07.148263] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2ef0) on tqpair=0x1a6c340 00:28:58.695 ===================================================== 00:28:58.695 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:58.695 ===================================================== 00:28:58.695 Controller Capabilities/Features 00:28:58.695 ================================ 00:28:58.695 Vendor ID: 0000 00:28:58.695 Subsystem Vendor ID: 0000 00:28:58.695 Serial Number: .................... 00:28:58.695 Model Number: ........................................ 
00:28:58.696 Firmware Version: 24.01.1 00:28:58.696 Recommended Arb Burst: 0 00:28:58.696 IEEE OUI Identifier: 00 00 00 00:28:58.696 Multi-path I/O 00:28:58.696 May have multiple subsystem ports: No 00:28:58.696 May have multiple controllers: No 00:28:58.696 Associated with SR-IOV VF: No 00:28:58.696 Max Data Transfer Size: 131072 00:28:58.696 Max Number of Namespaces: 0 00:28:58.696 Max Number of I/O Queues: 1024 00:28:58.696 NVMe Specification Version (VS): 1.3 00:28:58.696 NVMe Specification Version (Identify): 1.3 00:28:58.696 Maximum Queue Entries: 128 00:28:58.696 Contiguous Queues Required: Yes 00:28:58.696 Arbitration Mechanisms Supported 00:28:58.696 Weighted Round Robin: Not Supported 00:28:58.696 Vendor Specific: Not Supported 00:28:58.696 Reset Timeout: 15000 ms 00:28:58.696 Doorbell Stride: 4 bytes 00:28:58.696 NVM Subsystem Reset: Not Supported 00:28:58.696 Command Sets Supported 00:28:58.696 NVM Command Set: Supported 00:28:58.696 Boot Partition: Not Supported 00:28:58.696 Memory Page Size Minimum: 4096 bytes 00:28:58.696 Memory Page Size Maximum: 4096 bytes 00:28:58.696 Persistent Memory Region: Not Supported 00:28:58.696 Optional Asynchronous Events Supported 00:28:58.696 Namespace Attribute Notices: Not Supported 00:28:58.696 Firmware Activation Notices: Not Supported 00:28:58.696 ANA Change Notices: Not Supported 00:28:58.696 PLE Aggregate Log Change Notices: Not Supported 00:28:58.696 LBA Status Info Alert Notices: Not Supported 00:28:58.696 EGE Aggregate Log Change Notices: Not Supported 00:28:58.696 Normal NVM Subsystem Shutdown event: Not Supported 00:28:58.696 Zone Descriptor Change Notices: Not Supported 00:28:58.696 Discovery Log Change Notices: Supported 00:28:58.696 Controller Attributes 00:28:58.696 128-bit Host Identifier: Not Supported 00:28:58.696 Non-Operational Permissive Mode: Not Supported 00:28:58.696 NVM Sets: Not Supported 00:28:58.696 Read Recovery Levels: Not Supported 00:28:58.696 Endurance Groups: Not Supported 
00:28:58.696 Predictable Latency Mode: Not Supported 00:28:58.696 Traffic Based Keep ALive: Not Supported 00:28:58.696 Namespace Granularity: Not Supported 00:28:58.696 SQ Associations: Not Supported 00:28:58.696 UUID List: Not Supported 00:28:58.696 Multi-Domain Subsystem: Not Supported 00:28:58.696 Fixed Capacity Management: Not Supported 00:28:58.696 Variable Capacity Management: Not Supported 00:28:58.696 Delete Endurance Group: Not Supported 00:28:58.696 Delete NVM Set: Not Supported 00:28:58.696 Extended LBA Formats Supported: Not Supported 00:28:58.696 Flexible Data Placement Supported: Not Supported 00:28:58.696 00:28:58.696 Controller Memory Buffer Support 00:28:58.696 ================================ 00:28:58.696 Supported: No 00:28:58.696 00:28:58.696 Persistent Memory Region Support 00:28:58.696 ================================ 00:28:58.696 Supported: No 00:28:58.696 00:28:58.696 Admin Command Set Attributes 00:28:58.696 ============================ 00:28:58.696 Security Send/Receive: Not Supported 00:28:58.696 Format NVM: Not Supported 00:28:58.696 Firmware Activate/Download: Not Supported 00:28:58.696 Namespace Management: Not Supported 00:28:58.696 Device Self-Test: Not Supported 00:28:58.696 Directives: Not Supported 00:28:58.696 NVMe-MI: Not Supported 00:28:58.696 Virtualization Management: Not Supported 00:28:58.696 Doorbell Buffer Config: Not Supported 00:28:58.696 Get LBA Status Capability: Not Supported 00:28:58.696 Command & Feature Lockdown Capability: Not Supported 00:28:58.696 Abort Command Limit: 1 00:28:58.696 Async Event Request Limit: 4 00:28:58.696 Number of Firmware Slots: N/A 00:28:58.696 Firmware Slot 1 Read-Only: N/A 00:28:58.696 Firmware Activation Without Reset: N/A 00:28:58.696 Multiple Update Detection Support: N/A 00:28:58.696 Firmware Update Granularity: No Information Provided 00:28:58.696 Per-Namespace SMART Log: No 00:28:58.696 Asymmetric Namespace Access Log Page: Not Supported 00:28:58.696 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:28:58.696 Command Effects Log Page: Not Supported 00:28:58.696 Get Log Page Extended Data: Supported 00:28:58.696 Telemetry Log Pages: Not Supported 00:28:58.696 Persistent Event Log Pages: Not Supported 00:28:58.696 Supported Log Pages Log Page: May Support 00:28:58.696 Commands Supported & Effects Log Page: Not Supported 00:28:58.696 Feature Identifiers & Effects Log Page:May Support 00:28:58.696 NVMe-MI Commands & Effects Log Page: May Support 00:28:58.696 Data Area 4 for Telemetry Log: Not Supported 00:28:58.696 Error Log Page Entries Supported: 128 00:28:58.696 Keep Alive: Not Supported 00:28:58.696 00:28:58.696 NVM Command Set Attributes 00:28:58.696 ========================== 00:28:58.696 Submission Queue Entry Size 00:28:58.696 Max: 1 00:28:58.696 Min: 1 00:28:58.696 Completion Queue Entry Size 00:28:58.696 Max: 1 00:28:58.696 Min: 1 00:28:58.696 Number of Namespaces: 0 00:28:58.696 Compare Command: Not Supported 00:28:58.696 Write Uncorrectable Command: Not Supported 00:28:58.696 Dataset Management Command: Not Supported 00:28:58.696 Write Zeroes Command: Not Supported 00:28:58.696 Set Features Save Field: Not Supported 00:28:58.696 Reservations: Not Supported 00:28:58.696 Timestamp: Not Supported 00:28:58.696 Copy: Not Supported 00:28:58.696 Volatile Write Cache: Not Present 00:28:58.696 Atomic Write Unit (Normal): 1 00:28:58.696 Atomic Write Unit (PFail): 1 00:28:58.696 Atomic Compare & Write Unit: 1 00:28:58.696 Fused Compare & Write: Supported 00:28:58.696 Scatter-Gather List 00:28:58.696 SGL Command Set: Supported 00:28:58.696 SGL Keyed: Supported 00:28:58.696 SGL Bit Bucket Descriptor: Not Supported 00:28:58.696 SGL Metadata Pointer: Not Supported 00:28:58.696 Oversized SGL: Not Supported 00:28:58.696 SGL Metadata Address: Not Supported 00:28:58.696 SGL Offset: Supported 00:28:58.696 Transport SGL Data Block: Not Supported 00:28:58.696 Replay Protected Memory Block: Not Supported 00:28:58.696 00:28:58.696 
Firmware Slot Information 00:28:58.696 ========================= 00:28:58.696 Active slot: 0 00:28:58.696 00:28:58.696 00:28:58.696 Error Log 00:28:58.696 ========= 00:28:58.696 00:28:58.696 Active Namespaces 00:28:58.696 ================= 00:28:58.696 Discovery Log Page 00:28:58.696 ================== 00:28:58.696 Generation Counter: 2 00:28:58.696 Number of Records: 2 00:28:58.696 Record Format: 0 00:28:58.696 00:28:58.696 Discovery Log Entry 0 00:28:58.696 ---------------------- 00:28:58.696 Transport Type: 3 (TCP) 00:28:58.696 Address Family: 1 (IPv4) 00:28:58.696 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:58.696 Entry Flags: 00:28:58.696 Duplicate Returned Information: 1 00:28:58.696 Explicit Persistent Connection Support for Discovery: 1 00:28:58.696 Transport Requirements: 00:28:58.696 Secure Channel: Not Required 00:28:58.696 Port ID: 0 (0x0000) 00:28:58.696 Controller ID: 65535 (0xffff) 00:28:58.696 Admin Max SQ Size: 128 00:28:58.696 Transport Service Identifier: 4420 00:28:58.696 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:58.696 Transport Address: 10.0.0.2 00:28:58.696 Discovery Log Entry 1 00:28:58.696 ---------------------- 00:28:58.696 Transport Type: 3 (TCP) 00:28:58.696 Address Family: 1 (IPv4) 00:28:58.696 Subsystem Type: 2 (NVM Subsystem) 00:28:58.696 Entry Flags: 00:28:58.696 Duplicate Returned Information: 0 00:28:58.696 Explicit Persistent Connection Support for Discovery: 0 00:28:58.696 Transport Requirements: 00:28:58.696 Secure Channel: Not Required 00:28:58.696 Port ID: 0 (0x0000) 00:28:58.696 Controller ID: 65535 (0xffff) 00:28:58.696 Admin Max SQ Size: 128 00:28:58.696 Transport Service Identifier: 4420 00:28:58.696 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:58.696 Transport Address: 10.0.0.2 [2024-10-13 17:39:07.148351] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:58.696 [2024-10-13 17:39:07.148364] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.696 [2024-10-13 17:39:07.148371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.696 [2024-10-13 17:39:07.148379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.696 [2024-10-13 17:39:07.148385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.696 [2024-10-13 17:39:07.148394] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.696 [2024-10-13 17:39:07.148397] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.696 [2024-10-13 17:39:07.148401] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.696 [2024-10-13 17:39:07.148408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.696 [2024-10-13 17:39:07.148421] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.696 [2024-10-13 17:39:07.148608] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.696 [2024-10-13 17:39:07.148615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.696 [2024-10-13 17:39:07.148618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.696 [2024-10-13 17:39:07.148622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.696 [2024-10-13 17:39:07.148629] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.696 [2024-10-13 17:39:07.148633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 
17:39:07.148637] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.148643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.148656] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.148842] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.148848] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.148851] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.148855] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.148861] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:58.697 [2024-10-13 17:39:07.148865] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:58.697 [2024-10-13 17:39:07.148875] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.148879] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.148882] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.148889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.148899] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.149022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 
17:39:07.149028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.149031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.149046] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149050] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.149060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.149077] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.149291] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.149297] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.149301] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149304] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.149315] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149318] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149322] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.149329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 
[2024-10-13 17:39:07.149338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.149542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.149549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.149552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149556] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.149566] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149569] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149573] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.149580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.149589] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.149762] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.149768] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.149772] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149775] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.149786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149789] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.149793] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.149800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.149809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.150001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.150007] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.150010] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150014] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.150024] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150032] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.150038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.150050] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.150233] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.150240] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.150243] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150247] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 
[2024-10-13 17:39:07.150257] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.150271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.150281] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.150470] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.150476] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.150479] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150483] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.150493] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150497] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150500] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.150507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.150517] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.150658] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.150664] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 
[2024-10-13 17:39:07.150668] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150672] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.150682] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150686] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.150696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.150705] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.150890] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.150896] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.150900] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150903] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.150914] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.150921] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.150928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.150937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, 
cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.155070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.155078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.155082] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.155085] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.155096] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.155099] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.155103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6c340) 00:28:58.697 [2024-10-13 17:39:07.155110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.697 [2024-10-13 17:39:07.155121] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ad2d90, cid 3, qid 0 00:28:58.697 [2024-10-13 17:39:07.155295] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.697 [2024-10-13 17:39:07.155302] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.697 [2024-10-13 17:39:07.155305] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.697 [2024-10-13 17:39:07.155309] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ad2d90) on tqpair=0x1a6c340 00:28:58.697 [2024-10-13 17:39:07.155317] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:28:58.697 00:28:58.697 17:39:07 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:58.698 [2024-10-13 17:39:07.190737] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:58.698 [2024-10-13 17:39:07.190779] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344770 ] 00:28:58.698 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.961 [2024-10-13 17:39:07.224602] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:58.961 [2024-10-13 17:39:07.224644] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:58.961 [2024-10-13 17:39:07.224649] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:58.961 [2024-10-13 17:39:07.224659] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:58.961 [2024-10-13 17:39:07.224666] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:58.961 [2024-10-13 17:39:07.225070] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:58.961 [2024-10-13 17:39:07.225098] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x916340 0 00:28:58.961 [2024-10-13 17:39:07.235070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:58.961 [2024-10-13 17:39:07.235079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:58.961 [2024-10-13 17:39:07.235083] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:58.961 [2024-10-13 17:39:07.235087] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:58.961 [2024-10-13 17:39:07.235117] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.235122] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.235126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x916340) 00:28:58.961 [2024-10-13 17:39:07.235139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:58.961 [2024-10-13 17:39:07.235154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97c970, cid 0, qid 0 00:28:58.961 [2024-10-13 17:39:07.243070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.961 [2024-10-13 17:39:07.243079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.961 [2024-10-13 17:39:07.243083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97c970) on tqpair=0x916340 00:28:58.961 [2024-10-13 17:39:07.243099] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:58.961 [2024-10-13 17:39:07.243104] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:58.961 [2024-10-13 17:39:07.243110] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:58.961 [2024-10-13 17:39:07.243120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243124] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x916340) 00:28:58.961 [2024-10-13 17:39:07.243134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:58.961 [2024-10-13 17:39:07.243147] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97c970, cid 0, qid 0 00:28:58.961 [2024-10-13 17:39:07.243345] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.961 [2024-10-13 17:39:07.243351] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.961 [2024-10-13 17:39:07.243355] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243358] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97c970) on tqpair=0x916340 00:28:58.961 [2024-10-13 17:39:07.243363] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:58.961 [2024-10-13 17:39:07.243371] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:58.961 [2024-10-13 17:39:07.243377] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243381] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243385] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x916340) 00:28:58.961 [2024-10-13 17:39:07.243392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.961 [2024-10-13 17:39:07.243402] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97c970, cid 0, qid 0 00:28:58.961 [2024-10-13 17:39:07.243553] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.961 [2024-10-13 17:39:07.243559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.961 [2024-10-13 17:39:07.243563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243567] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97c970) on tqpair=0x916340 00:28:58.961 [2024-10-13 17:39:07.243571] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:58.961 [2024-10-13 17:39:07.243579] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:58.961 [2024-10-13 17:39:07.243586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243589] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243593] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x916340) 00:28:58.961 [2024-10-13 17:39:07.243600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.961 [2024-10-13 17:39:07.243613] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97c970, cid 0, qid 0 00:28:58.961 [2024-10-13 17:39:07.243774] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.961 [2024-10-13 17:39:07.243781] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.961 [2024-10-13 17:39:07.243784] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243788] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97c970) on tqpair=0x916340 00:28:58.961 [2024-10-13 17:39:07.243792] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:58.961 [2024-10-13 17:39:07.243802] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243805] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.961 [2024-10-13 17:39:07.243809] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x916340) 00:28:58.961 [2024-10-13 17:39:07.243816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.961 [2024-10-13 17:39:07.243826] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97c970, cid 0, qid 0 00:28:58.961 [2024-10-13 17:39:07.244007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.961 [2024-10-13 17:39:07.244013] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.961 [2024-10-13 17:39:07.244016] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.244020] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97c970) on tqpair=0x916340 00:28:58.962 [2024-10-13 17:39:07.244024] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:58.962 [2024-10-13 17:39:07.244029] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:58.962 [2024-10-13 17:39:07.244037] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:58.962 [2024-10-13 17:39:07.244142] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:58.962 [2024-10-13 17:39:07.244146] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:58.962 [2024-10-13 17:39:07.244153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.244157] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.244160] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x916340) 00:28:58.962 [2024-10-13 17:39:07.244167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.962 [2024-10-13 17:39:07.244178] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97c970, cid 0, qid 0 00:28:58.962 [2024-10-13 17:39:07.244339] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.962 [2024-10-13 17:39:07.244345] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.962 [2024-10-13 17:39:07.244349] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.244352] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97c970) on tqpair=0x916340 00:28:58.962 [2024-10-13 17:39:07.244357] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:58.962 [2024-10-13 17:39:07.244366] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.244370] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.244374] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x916340) 00:28:58.962 [2024-10-13 17:39:07.244380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.962 [2024-10-13 17:39:07.244393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97c970, cid 0, qid 0 00:28:58.962 [2024-10-13 17:39:07.244582] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.962 [2024-10-13 17:39:07.244588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.962 [2024-10-13 17:39:07.244591] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:28:58.962 [2024-10-13 17:39:07.244595] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97c970) on tqpair=0x916340 00:28:58.962 [2024-10-13 17:39:07.244600] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:58.962 [2024-10-13 17:39:07.244604] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:58.962 [2024-10-13 17:39:07.244612] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:58.962 [2024-10-13 17:39:07.244619] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:58.962 [2024-10-13 17:39:07.244627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.244630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.244634] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x916340) 00:28:58.962 [2024-10-13 17:39:07.244641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.962 [2024-10-13 17:39:07.244651] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97c970, cid 0, qid 0 00:28:58.962 [2024-10-13 17:39:07.244852] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.962 [2024-10-13 17:39:07.244858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.962 [2024-10-13 17:39:07.244862] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.244866] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x916340): datao=0, datal=4096, cccid=0 00:28:58.962 [2024-10-13 17:39:07.244870] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x97c970) on tqpair(0x916340): expected_datao=0, payload_size=4096 00:28:58.962 [2024-10-13 17:39:07.244878] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.244882] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.962 [2024-10-13 17:39:07.245067] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.962 [2024-10-13 17:39:07.245071] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245075] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97c970) on tqpair=0x916340 00:28:58.962 [2024-10-13 17:39:07.245081] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:58.962 [2024-10-13 17:39:07.245086] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:58.962 [2024-10-13 17:39:07.245090] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:58.962 [2024-10-13 17:39:07.245094] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:58.962 [2024-10-13 17:39:07.245099] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:58.962 [2024-10-13 17:39:07.245103] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:58.962 [2024-10-13 17:39:07.245114] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:58.962 
[2024-10-13 17:39:07.245122] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245126] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245130] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x916340) 00:28:58.962 [2024-10-13 17:39:07.245137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:58.962 [2024-10-13 17:39:07.245148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97c970, cid 0, qid 0 00:28:58.962 [2024-10-13 17:39:07.245307] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.962 [2024-10-13 17:39:07.245313] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.962 [2024-10-13 17:39:07.245317] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245321] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97c970) on tqpair=0x916340 00:28:58.962 [2024-10-13 17:39:07.245327] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245331] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245334] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x916340) 00:28:58.962 [2024-10-13 17:39:07.245340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.962 [2024-10-13 17:39:07.245346] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245350] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245353] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x916340) 
00:28:58.962 [2024-10-13 17:39:07.245359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.962 [2024-10-13 17:39:07.245365] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245369] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245372] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x916340) 00:28:58.962 [2024-10-13 17:39:07.245378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.962 [2024-10-13 17:39:07.245384] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245387] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245391] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x916340) 00:28:58.962 [2024-10-13 17:39:07.245397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.962 [2024-10-13 17:39:07.245401] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:58.962 [2024-10-13 17:39:07.245412] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:58.962 [2024-10-13 17:39:07.245418] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245422] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245425] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x916340) 00:28:58.962 [2024-10-13 17:39:07.245432] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.962 [2024-10-13 17:39:07.245443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97c970, cid 0, qid 0 00:28:58.962 [2024-10-13 17:39:07.245449] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cad0, cid 1, qid 0 00:28:58.962 [2024-10-13 17:39:07.245453] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cc30, cid 2, qid 0 00:28:58.962 [2024-10-13 17:39:07.245460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cd90, cid 3, qid 0 00:28:58.962 [2024-10-13 17:39:07.245465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cef0, cid 4, qid 0 00:28:58.962 [2024-10-13 17:39:07.245664] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.962 [2024-10-13 17:39:07.245671] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.962 [2024-10-13 17:39:07.245674] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245678] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cef0) on tqpair=0x916340 00:28:58.962 [2024-10-13 17:39:07.245682] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:58.962 [2024-10-13 17:39:07.245687] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:58.962 [2024-10-13 17:39:07.245695] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:58.962 [2024-10-13 17:39:07.245703] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 
00:28:58.962 [2024-10-13 17:39:07.245709] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245713] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245716] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x916340) 00:28:58.962 [2024-10-13 17:39:07.245723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:58.962 [2024-10-13 17:39:07.245733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cef0, cid 4, qid 0 00:28:58.962 [2024-10-13 17:39:07.245913] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.962 [2024-10-13 17:39:07.245919] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.962 [2024-10-13 17:39:07.245922] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.962 [2024-10-13 17:39:07.245926] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cef0) on tqpair=0x916340 00:28:58.963 [2024-10-13 17:39:07.245987] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.245995] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.246002] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246009] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x916340) 00:28:58.963 [2024-10-13 17:39:07.246016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.963 [2024-10-13 17:39:07.246026] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cef0, cid 4, qid 0 00:28:58.963 [2024-10-13 17:39:07.246222] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.963 [2024-10-13 17:39:07.246229] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.963 [2024-10-13 17:39:07.246233] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246237] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x916340): datao=0, datal=4096, cccid=4 00:28:58.963 [2024-10-13 17:39:07.246241] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x97cef0) on tqpair(0x916340): expected_datao=0, payload_size=4096 00:28:58.963 [2024-10-13 17:39:07.246249] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246252] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246396] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.963 [2024-10-13 17:39:07.246403] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.963 [2024-10-13 17:39:07.246407] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246410] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cef0) on tqpair=0x916340 00:28:58.963 [2024-10-13 17:39:07.246421] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:58.963 [2024-10-13 17:39:07.246433] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.246442] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 
00:28:58.963 [2024-10-13 17:39:07.246449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246456] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x916340) 00:28:58.963 [2024-10-13 17:39:07.246463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.963 [2024-10-13 17:39:07.246476] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cef0, cid 4, qid 0 00:28:58.963 [2024-10-13 17:39:07.246638] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.963 [2024-10-13 17:39:07.246645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.963 [2024-10-13 17:39:07.246648] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246652] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x916340): datao=0, datal=4096, cccid=4 00:28:58.963 [2024-10-13 17:39:07.246656] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x97cef0) on tqpair(0x916340): expected_datao=0, payload_size=4096 00:28:58.963 [2024-10-13 17:39:07.246672] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246676] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246830] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.963 [2024-10-13 17:39:07.246836] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.963 [2024-10-13 17:39:07.246839] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cef0) on 
tqpair=0x916340 00:28:58.963 [2024-10-13 17:39:07.246856] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.246865] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.246872] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246876] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.246879] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x916340) 00:28:58.963 [2024-10-13 17:39:07.246886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.963 [2024-10-13 17:39:07.246896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cef0, cid 4, qid 0 00:28:58.963 [2024-10-13 17:39:07.251069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.963 [2024-10-13 17:39:07.251078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.963 [2024-10-13 17:39:07.251081] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251085] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x916340): datao=0, datal=4096, cccid=4 00:28:58.963 [2024-10-13 17:39:07.251092] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x97cef0) on tqpair(0x916340): expected_datao=0, payload_size=4096 00:28:58.963 [2024-10-13 17:39:07.251099] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251103] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251109] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.963 [2024-10-13 17:39:07.251114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.963 [2024-10-13 17:39:07.251118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cef0) on tqpair=0x916340 00:28:58.963 [2024-10-13 17:39:07.251129] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.251137] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.251145] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.251151] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.251156] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.251161] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:58.963 [2024-10-13 17:39:07.251166] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:58.963 [2024-10-13 17:39:07.251171] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:58.963 [2024-10-13 17:39:07.251184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251188] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251191] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x916340) 00:28:58.963 [2024-10-13 17:39:07.251198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.963 [2024-10-13 17:39:07.251204] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x916340) 00:28:58.963 [2024-10-13 17:39:07.251217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.963 [2024-10-13 17:39:07.251231] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cef0, cid 4, qid 0 00:28:58.963 [2024-10-13 17:39:07.251237] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97d050, cid 5, qid 0 00:28:58.963 [2024-10-13 17:39:07.251396] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.963 [2024-10-13 17:39:07.251402] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.963 [2024-10-13 17:39:07.251405] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251409] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cef0) on tqpair=0x916340 00:28:58.963 [2024-10-13 17:39:07.251415] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.963 [2024-10-13 17:39:07.251421] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.963 [2024-10-13 17:39:07.251425] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251428] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97d050) on tqpair=0x916340 00:28:58.963 [2024-10-13 17:39:07.251438] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251467] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x916340) 00:28:58.963 [2024-10-13 17:39:07.251477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.963 [2024-10-13 17:39:07.251487] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97d050, cid 5, qid 0 00:28:58.963 [2024-10-13 17:39:07.251677] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.963 [2024-10-13 17:39:07.251684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.963 [2024-10-13 17:39:07.251687] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97d050) on tqpair=0x916340 00:28:58.963 [2024-10-13 17:39:07.251700] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251704] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251707] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x916340) 00:28:58.963 [2024-10-13 17:39:07.251713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.963 [2024-10-13 17:39:07.251723] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97d050, cid 5, qid 0 00:28:58.963 [2024-10-13 17:39:07.251870] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:28:58.963 [2024-10-13 17:39:07.251877] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.963 [2024-10-13 17:39:07.251880] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251884] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97d050) on tqpair=0x916340 00:28:58.963 [2024-10-13 17:39:07.251893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251896] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.251900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x916340) 00:28:58.963 [2024-10-13 17:39:07.251906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.963 [2024-10-13 17:39:07.251916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97d050, cid 5, qid 0 00:28:58.963 [2024-10-13 17:39:07.252096] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.963 [2024-10-13 17:39:07.252102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.963 [2024-10-13 17:39:07.252106] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.963 [2024-10-13 17:39:07.252110] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97d050) on tqpair=0x916340 00:28:58.964 [2024-10-13 17:39:07.252120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252124] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x916340) 00:28:58.964 [2024-10-13 17:39:07.252134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff 
cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.964 [2024-10-13 17:39:07.252141] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252144] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x916340) 00:28:58.964 [2024-10-13 17:39:07.252154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.964 [2024-10-13 17:39:07.252161] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252166] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252170] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x916340) 00:28:58.964 [2024-10-13 17:39:07.252176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.964 [2024-10-13 17:39:07.252183] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252187] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252190] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x916340) 00:28:58.964 [2024-10-13 17:39:07.252196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.964 [2024-10-13 17:39:07.252208] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97d050, cid 5, qid 0 00:28:58.964 [2024-10-13 17:39:07.252213] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x97cef0, cid 4, qid 0 00:28:58.964 [2024-10-13 17:39:07.252218] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97d1b0, cid 6, qid 0 00:28:58.964 [2024-10-13 17:39:07.252223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97d310, cid 7, qid 0 00:28:58.964 [2024-10-13 17:39:07.252443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.964 [2024-10-13 17:39:07.252450] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.964 [2024-10-13 17:39:07.252453] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252457] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x916340): datao=0, datal=8192, cccid=5 00:28:58.964 [2024-10-13 17:39:07.252461] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x97d050) on tqpair(0x916340): expected_datao=0, payload_size=8192 00:28:58.964 [2024-10-13 17:39:07.252562] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252567] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252572] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.964 [2024-10-13 17:39:07.252578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.964 [2024-10-13 17:39:07.252581] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252585] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x916340): datao=0, datal=512, cccid=4 00:28:58.964 [2024-10-13 17:39:07.252590] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x97cef0) on tqpair(0x916340): expected_datao=0, payload_size=512 00:28:58.964 [2024-10-13 17:39:07.252597] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252600] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:28:58.964 [2024-10-13 17:39:07.252606] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.964 [2024-10-13 17:39:07.252612] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.964 [2024-10-13 17:39:07.252615] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252618] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x916340): datao=0, datal=512, cccid=6 00:28:58.964 [2024-10-13 17:39:07.252623] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x97d1b0) on tqpair(0x916340): expected_datao=0, payload_size=512 00:28:58.964 [2024-10-13 17:39:07.252630] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252633] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:58.964 [2024-10-13 17:39:07.252645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:58.964 [2024-10-13 17:39:07.252648] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252651] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x916340): datao=0, datal=4096, cccid=7 00:28:58.964 [2024-10-13 17:39:07.252658] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x97d310) on tqpair(0x916340): expected_datao=0, payload_size=4096 00:28:58.964 [2024-10-13 17:39:07.252670] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252673] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252832] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.964 [2024-10-13 17:39:07.252839] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.964 [2024-10-13 17:39:07.252842] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252846] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97d050) on tqpair=0x916340 00:28:58.964 [2024-10-13 17:39:07.252859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.964 [2024-10-13 17:39:07.252865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.964 [2024-10-13 17:39:07.252868] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252872] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cef0) on tqpair=0x916340 00:28:58.964 [2024-10-13 17:39:07.252880] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.964 [2024-10-13 17:39:07.252886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.964 [2024-10-13 17:39:07.252890] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252893] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97d1b0) on tqpair=0x916340 00:28:58.964 [2024-10-13 17:39:07.252900] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.964 [2024-10-13 17:39:07.252906] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.964 [2024-10-13 17:39:07.252909] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.964 [2024-10-13 17:39:07.252913] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97d310) on tqpair=0x916340 00:28:58.964 ===================================================== 00:28:58.964 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.964 ===================================================== 00:28:58.964 Controller Capabilities/Features 00:28:58.964 ================================ 00:28:58.964 Vendor ID: 8086 00:28:58.964 Subsystem Vendor ID: 8086 00:28:58.964 Serial 
Number: SPDK00000000000001 00:28:58.964 Model Number: SPDK bdev Controller 00:28:58.964 Firmware Version: 24.01.1 00:28:58.964 Recommended Arb Burst: 6 00:28:58.964 IEEE OUI Identifier: e4 d2 5c 00:28:58.964 Multi-path I/O 00:28:58.964 May have multiple subsystem ports: Yes 00:28:58.964 May have multiple controllers: Yes 00:28:58.964 Associated with SR-IOV VF: No 00:28:58.964 Max Data Transfer Size: 131072 00:28:58.964 Max Number of Namespaces: 32 00:28:58.964 Max Number of I/O Queues: 127 00:28:58.964 NVMe Specification Version (VS): 1.3 00:28:58.964 NVMe Specification Version (Identify): 1.3 00:28:58.964 Maximum Queue Entries: 128 00:28:58.964 Contiguous Queues Required: Yes 00:28:58.964 Arbitration Mechanisms Supported 00:28:58.964 Weighted Round Robin: Not Supported 00:28:58.964 Vendor Specific: Not Supported 00:28:58.964 Reset Timeout: 15000 ms 00:28:58.964 Doorbell Stride: 4 bytes 00:28:58.964 NVM Subsystem Reset: Not Supported 00:28:58.964 Command Sets Supported 00:28:58.964 NVM Command Set: Supported 00:28:58.964 Boot Partition: Not Supported 00:28:58.964 Memory Page Size Minimum: 4096 bytes 00:28:58.964 Memory Page Size Maximum: 4096 bytes 00:28:58.964 Persistent Memory Region: Not Supported 00:28:58.964 Optional Asynchronous Events Supported 00:28:58.964 Namespace Attribute Notices: Supported 00:28:58.964 Firmware Activation Notices: Not Supported 00:28:58.964 ANA Change Notices: Not Supported 00:28:58.964 PLE Aggregate Log Change Notices: Not Supported 00:28:58.964 LBA Status Info Alert Notices: Not Supported 00:28:58.964 EGE Aggregate Log Change Notices: Not Supported 00:28:58.964 Normal NVM Subsystem Shutdown event: Not Supported 00:28:58.964 Zone Descriptor Change Notices: Not Supported 00:28:58.964 Discovery Log Change Notices: Not Supported 00:28:58.964 Controller Attributes 00:28:58.964 128-bit Host Identifier: Supported 00:28:58.964 Non-Operational Permissive Mode: Not Supported 00:28:58.964 NVM Sets: Not Supported 00:28:58.964 Read Recovery 
Levels: Not Supported 00:28:58.964 Endurance Groups: Not Supported 00:28:58.964 Predictable Latency Mode: Not Supported 00:28:58.964 Traffic Based Keep ALive: Not Supported 00:28:58.964 Namespace Granularity: Not Supported 00:28:58.964 SQ Associations: Not Supported 00:28:58.964 UUID List: Not Supported 00:28:58.964 Multi-Domain Subsystem: Not Supported 00:28:58.964 Fixed Capacity Management: Not Supported 00:28:58.964 Variable Capacity Management: Not Supported 00:28:58.964 Delete Endurance Group: Not Supported 00:28:58.964 Delete NVM Set: Not Supported 00:28:58.964 Extended LBA Formats Supported: Not Supported 00:28:58.964 Flexible Data Placement Supported: Not Supported 00:28:58.964 00:28:58.964 Controller Memory Buffer Support 00:28:58.964 ================================ 00:28:58.964 Supported: No 00:28:58.964 00:28:58.964 Persistent Memory Region Support 00:28:58.964 ================================ 00:28:58.964 Supported: No 00:28:58.964 00:28:58.964 Admin Command Set Attributes 00:28:58.964 ============================ 00:28:58.964 Security Send/Receive: Not Supported 00:28:58.964 Format NVM: Not Supported 00:28:58.964 Firmware Activate/Download: Not Supported 00:28:58.964 Namespace Management: Not Supported 00:28:58.964 Device Self-Test: Not Supported 00:28:58.964 Directives: Not Supported 00:28:58.964 NVMe-MI: Not Supported 00:28:58.964 Virtualization Management: Not Supported 00:28:58.964 Doorbell Buffer Config: Not Supported 00:28:58.964 Get LBA Status Capability: Not Supported 00:28:58.964 Command & Feature Lockdown Capability: Not Supported 00:28:58.964 Abort Command Limit: 4 00:28:58.964 Async Event Request Limit: 4 00:28:58.964 Number of Firmware Slots: N/A 00:28:58.964 Firmware Slot 1 Read-Only: N/A 00:28:58.964 Firmware Activation Without Reset: N/A 00:28:58.965 Multiple Update Detection Support: N/A 00:28:58.965 Firmware Update Granularity: No Information Provided 00:28:58.965 Per-Namespace SMART Log: No 00:28:58.965 Asymmetric Namespace Access 
Log Page: Not Supported 00:28:58.965 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:58.965 Command Effects Log Page: Supported 00:28:58.965 Get Log Page Extended Data: Supported 00:28:58.965 Telemetry Log Pages: Not Supported 00:28:58.965 Persistent Event Log Pages: Not Supported 00:28:58.965 Supported Log Pages Log Page: May Support 00:28:58.965 Commands Supported & Effects Log Page: Not Supported 00:28:58.965 Feature Identifiers & Effects Log Page:May Support 00:28:58.965 NVMe-MI Commands & Effects Log Page: May Support 00:28:58.965 Data Area 4 for Telemetry Log: Not Supported 00:28:58.965 Error Log Page Entries Supported: 128 00:28:58.965 Keep Alive: Supported 00:28:58.965 Keep Alive Granularity: 10000 ms 00:28:58.965 00:28:58.965 NVM Command Set Attributes 00:28:58.965 ========================== 00:28:58.965 Submission Queue Entry Size 00:28:58.965 Max: 64 00:28:58.965 Min: 64 00:28:58.965 Completion Queue Entry Size 00:28:58.965 Max: 16 00:28:58.965 Min: 16 00:28:58.965 Number of Namespaces: 32 00:28:58.965 Compare Command: Supported 00:28:58.965 Write Uncorrectable Command: Not Supported 00:28:58.965 Dataset Management Command: Supported 00:28:58.965 Write Zeroes Command: Supported 00:28:58.965 Set Features Save Field: Not Supported 00:28:58.965 Reservations: Supported 00:28:58.965 Timestamp: Not Supported 00:28:58.965 Copy: Supported 00:28:58.965 Volatile Write Cache: Present 00:28:58.965 Atomic Write Unit (Normal): 1 00:28:58.965 Atomic Write Unit (PFail): 1 00:28:58.965 Atomic Compare & Write Unit: 1 00:28:58.965 Fused Compare & Write: Supported 00:28:58.965 Scatter-Gather List 00:28:58.965 SGL Command Set: Supported 00:28:58.965 SGL Keyed: Supported 00:28:58.965 SGL Bit Bucket Descriptor: Not Supported 00:28:58.965 SGL Metadata Pointer: Not Supported 00:28:58.965 Oversized SGL: Not Supported 00:28:58.965 SGL Metadata Address: Not Supported 00:28:58.965 SGL Offset: Supported 00:28:58.965 Transport SGL Data Block: Not Supported 00:28:58.965 Replay 
Protected Memory Block: Not Supported 00:28:58.965 00:28:58.965 Firmware Slot Information 00:28:58.965 ========================= 00:28:58.965 Active slot: 1 00:28:58.965 Slot 1 Firmware Revision: 24.01.1 00:28:58.965 00:28:58.965 00:28:58.965 Commands Supported and Effects 00:28:58.965 ============================== 00:28:58.965 Admin Commands 00:28:58.965 -------------- 00:28:58.965 Get Log Page (02h): Supported 00:28:58.965 Identify (06h): Supported 00:28:58.965 Abort (08h): Supported 00:28:58.965 Set Features (09h): Supported 00:28:58.965 Get Features (0Ah): Supported 00:28:58.965 Asynchronous Event Request (0Ch): Supported 00:28:58.965 Keep Alive (18h): Supported 00:28:58.965 I/O Commands 00:28:58.965 ------------ 00:28:58.965 Flush (00h): Supported LBA-Change 00:28:58.965 Write (01h): Supported LBA-Change 00:28:58.965 Read (02h): Supported 00:28:58.965 Compare (05h): Supported 00:28:58.965 Write Zeroes (08h): Supported LBA-Change 00:28:58.965 Dataset Management (09h): Supported LBA-Change 00:28:58.965 Copy (19h): Supported LBA-Change 00:28:58.965 Unknown (79h): Supported LBA-Change 00:28:58.965 Unknown (7Ah): Supported 00:28:58.965 00:28:58.965 Error Log 00:28:58.965 ========= 00:28:58.965 00:28:58.965 Arbitration 00:28:58.965 =========== 00:28:58.965 Arbitration Burst: 1 00:28:58.965 00:28:58.965 Power Management 00:28:58.965 ================ 00:28:58.965 Number of Power States: 1 00:28:58.965 Current Power State: Power State #0 00:28:58.965 Power State #0: 00:28:58.965 Max Power: 0.00 W 00:28:58.965 Non-Operational State: Operational 00:28:58.965 Entry Latency: Not Reported 00:28:58.965 Exit Latency: Not Reported 00:28:58.965 Relative Read Throughput: 0 00:28:58.965 Relative Read Latency: 0 00:28:58.965 Relative Write Throughput: 0 00:28:58.965 Relative Write Latency: 0 00:28:58.965 Idle Power: Not Reported 00:28:58.965 Active Power: Not Reported 00:28:58.965 Non-Operational Permissive Mode: Not Supported 00:28:58.965 00:28:58.965 Health Information 
00:28:58.965 ================== 00:28:58.965 Critical Warnings: 00:28:58.965 Available Spare Space: OK 00:28:58.965 Temperature: OK 00:28:58.965 Device Reliability: OK 00:28:58.965 Read Only: No 00:28:58.965 Volatile Memory Backup: OK 00:28:58.965 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:58.965 Temperature Threshold: [2024-10-13 17:39:07.253018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253024] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253027] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x916340) 00:28:58.965 [2024-10-13 17:39:07.253034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.965 [2024-10-13 17:39:07.253045] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97d310, cid 7, qid 0 00:28:58.965 [2024-10-13 17:39:07.253226] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.965 [2024-10-13 17:39:07.253233] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.965 [2024-10-13 17:39:07.253236] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97d310) on tqpair=0x916340 00:28:58.965 [2024-10-13 17:39:07.253268] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:58.965 [2024-10-13 17:39:07.253278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.965 [2024-10-13 17:39:07.253285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.965 [2024-10-13 17:39:07.253291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.965 [2024-10-13 17:39:07.253297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.965 [2024-10-13 17:39:07.253305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253308] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x916340) 00:28:58.965 [2024-10-13 17:39:07.253321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.965 [2024-10-13 17:39:07.253333] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cd90, cid 3, qid 0 00:28:58.965 [2024-10-13 17:39:07.253514] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.965 [2024-10-13 17:39:07.253520] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.965 [2024-10-13 17:39:07.253524] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253527] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cd90) on tqpair=0x916340 00:28:58.965 [2024-10-13 17:39:07.253534] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253541] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x916340) 00:28:58.965 [2024-10-13 17:39:07.253548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.965 [2024-10-13 17:39:07.253561] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cd90, cid 3, qid 0 00:28:58.965 [2024-10-13 17:39:07.253707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.965 [2024-10-13 17:39:07.253713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.965 [2024-10-13 17:39:07.253716] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cd90) on tqpair=0x916340 00:28:58.965 [2024-10-13 17:39:07.253725] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:58.965 [2024-10-13 17:39:07.253729] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:58.965 [2024-10-13 17:39:07.253738] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253742] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.965 [2024-10-13 17:39:07.253745] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x916340) 00:28:58.965 [2024-10-13 17:39:07.253752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.966 [2024-10-13 17:39:07.253762] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cd90, cid 3, qid 0 00:28:58.966 [2024-10-13 17:39:07.253904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.966 [2024-10-13 17:39:07.253910] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.966 [2024-10-13 17:39:07.253914] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.253917] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cd90) on tqpair=0x916340 00:28:58.966 [2024-10-13 
17:39:07.253927] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.253931] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.253935] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x916340) 00:28:58.966 [2024-10-13 17:39:07.253941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.966 [2024-10-13 17:39:07.253951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cd90, cid 3, qid 0 00:28:58.966 [2024-10-13 17:39:07.254140] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.966 [2024-10-13 17:39:07.254147] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.966 [2024-10-13 17:39:07.254150] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254154] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cd90) on tqpair=0x916340 00:28:58.966 [2024-10-13 17:39:07.254163] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254169] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254173] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x916340) 00:28:58.966 [2024-10-13 17:39:07.254179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.966 [2024-10-13 17:39:07.254190] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cd90, cid 3, qid 0 00:28:58.966 [2024-10-13 17:39:07.254361] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.966 [2024-10-13 17:39:07.254367] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.966 [2024-10-13 
17:39:07.254370] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254374] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cd90) on tqpair=0x916340 00:28:58.966 [2024-10-13 17:39:07.254383] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254387] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254391] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x916340) 00:28:58.966 [2024-10-13 17:39:07.254397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.966 [2024-10-13 17:39:07.254407] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cd90, cid 3, qid 0 00:28:58.966 [2024-10-13 17:39:07.254580] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.966 [2024-10-13 17:39:07.254586] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.966 [2024-10-13 17:39:07.254590] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254594] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cd90) on tqpair=0x916340 00:28:58.966 [2024-10-13 17:39:07.254603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254607] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254610] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x916340) 00:28:58.966 [2024-10-13 17:39:07.254617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.966 [2024-10-13 17:39:07.254626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cd90, cid 3, qid 0 
00:28:58.966 [2024-10-13 17:39:07.254812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.966 [2024-10-13 17:39:07.254818] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.966 [2024-10-13 17:39:07.254822] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cd90) on tqpair=0x916340 00:28:58.966 [2024-10-13 17:39:07.254835] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.254842] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x916340) 00:28:58.966 [2024-10-13 17:39:07.254849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.966 [2024-10-13 17:39:07.254858] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cd90, cid 3, qid 0 00:28:58.966 [2024-10-13 17:39:07.255037] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.966 [2024-10-13 17:39:07.255044] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.966 [2024-10-13 17:39:07.255047] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.255051] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cd90) on tqpair=0x916340 00:28:58.966 [2024-10-13 17:39:07.255060] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.259069] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.259076] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x916340) 00:28:58.966 [2024-10-13 17:39:07.259083] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.966 [2024-10-13 17:39:07.259094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x97cd90, cid 3, qid 0 00:28:58.966 [2024-10-13 17:39:07.259310] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:58.966 [2024-10-13 17:39:07.259317] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:58.966 [2024-10-13 17:39:07.259320] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:58.966 [2024-10-13 17:39:07.259324] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x97cd90) on tqpair=0x916340 00:28:58.966 [2024-10-13 17:39:07.259331] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:28:58.966 0 Kelvin (-273 Celsius) 00:28:58.966 Available Spare: 0% 00:28:58.966 Available Spare Threshold: 0% 00:28:58.966 Life Percentage Used: 0% 00:28:58.966 Data Units Read: 0 00:28:58.966 Data Units Written: 0 00:28:58.966 Host Read Commands: 0 00:28:58.966 Host Write Commands: 0 00:28:58.966 Controller Busy Time: 0 minutes 00:28:58.966 Power Cycles: 0 00:28:58.966 Power On Hours: 0 hours 00:28:58.966 Unsafe Shutdowns: 0 00:28:58.966 Unrecoverable Media Errors: 0 00:28:58.966 Lifetime Error Log Entries: 0 00:28:58.966 Warning Temperature Time: 0 minutes 00:28:58.966 Critical Temperature Time: 0 minutes 00:28:58.966 00:28:58.966 Number of Queues 00:28:58.966 ================ 00:28:58.966 Number of I/O Submission Queues: 127 00:28:58.966 Number of I/O Completion Queues: 127 00:28:58.966 00:28:58.966 Active Namespaces 00:28:58.966 ================= 00:28:58.966 Namespace ID:1 00:28:58.966 Error Recovery Timeout: Unlimited 00:28:58.966 Command Set Identifier: NVM (00h) 00:28:58.966 Deallocate: Supported 00:28:58.966 Deallocated/Unwritten Error: Not Supported 00:28:58.966 Deallocated Read Value: 
Unknown 00:28:58.966 Deallocate in Write Zeroes: Not Supported 00:28:58.966 Deallocated Guard Field: 0xFFFF 00:28:58.966 Flush: Supported 00:28:58.966 Reservation: Supported 00:28:58.966 Namespace Sharing Capabilities: Multiple Controllers 00:28:58.966 Size (in LBAs): 131072 (0GiB) 00:28:58.966 Capacity (in LBAs): 131072 (0GiB) 00:28:58.966 Utilization (in LBAs): 131072 (0GiB) 00:28:58.966 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:58.966 EUI64: ABCDEF0123456789 00:28:58.966 UUID: a83f9f93-703e-4195-8f00-3f09ca171fbb 00:28:58.966 Thin Provisioning: Not Supported 00:28:58.966 Per-NS Atomic Units: Yes 00:28:58.966 Atomic Boundary Size (Normal): 0 00:28:58.966 Atomic Boundary Size (PFail): 0 00:28:58.966 Atomic Boundary Offset: 0 00:28:58.966 Maximum Single Source Range Length: 65535 00:28:58.966 Maximum Copy Length: 65535 00:28:58.966 Maximum Source Range Count: 1 00:28:58.966 NGUID/EUI64 Never Reused: No 00:28:58.966 Namespace Write Protected: No 00:28:58.966 Number of LBA Formats: 1 00:28:58.966 Current LBA Format: LBA Format #00 00:28:58.966 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:58.966 00:28:58.966 17:39:07 -- host/identify.sh@51 -- # sync 00:28:58.966 17:39:07 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:58.966 17:39:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.966 17:39:07 -- common/autotest_common.sh@10 -- # set +x 00:28:58.966 17:39:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.966 17:39:07 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:58.966 17:39:07 -- host/identify.sh@56 -- # nvmftestfini 00:28:58.966 17:39:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:58.966 17:39:07 -- nvmf/common.sh@116 -- # sync 00:28:58.966 17:39:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:58.966 17:39:07 -- nvmf/common.sh@119 -- # set +e 00:28:58.966 17:39:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:58.966 17:39:07 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:28:58.966 rmmod nvme_tcp 00:28:58.966 rmmod nvme_fabrics 00:28:58.966 rmmod nvme_keyring 00:28:58.966 17:39:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:58.966 17:39:07 -- nvmf/common.sh@123 -- # set -e 00:28:58.966 17:39:07 -- nvmf/common.sh@124 -- # return 0 00:28:58.966 17:39:07 -- nvmf/common.sh@477 -- # '[' -n 3344421 ']' 00:28:58.966 17:39:07 -- nvmf/common.sh@478 -- # killprocess 3344421 00:28:58.966 17:39:07 -- common/autotest_common.sh@926 -- # '[' -z 3344421 ']' 00:28:58.966 17:39:07 -- common/autotest_common.sh@930 -- # kill -0 3344421 00:28:58.966 17:39:07 -- common/autotest_common.sh@931 -- # uname 00:28:58.966 17:39:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:58.966 17:39:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3344421 00:28:58.966 17:39:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:58.966 17:39:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:58.966 17:39:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3344421' 00:28:58.966 killing process with pid 3344421 00:28:58.966 17:39:07 -- common/autotest_common.sh@945 -- # kill 3344421 00:28:58.966 [2024-10-13 17:39:07.419058] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:58.966 17:39:07 -- common/autotest_common.sh@950 -- # wait 3344421 00:28:59.227 17:39:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:59.227 17:39:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:59.227 17:39:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:59.227 17:39:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:59.227 17:39:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:59.227 17:39:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.227 17:39:07 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.227 17:39:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.140 17:39:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:01.140 00:29:01.140 real 0m11.200s 00:29:01.140 user 0m8.006s 00:29:01.140 sys 0m5.834s 00:29:01.140 17:39:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.140 17:39:09 -- common/autotest_common.sh@10 -- # set +x 00:29:01.140 ************************************ 00:29:01.140 END TEST nvmf_identify 00:29:01.140 ************************************ 00:29:01.415 17:39:09 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:01.415 17:39:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:01.415 17:39:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.415 17:39:09 -- common/autotest_common.sh@10 -- # set +x 00:29:01.415 ************************************ 00:29:01.415 START TEST nvmf_perf 00:29:01.415 ************************************ 00:29:01.415 17:39:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:01.415 * Looking for test storage... 
00:29:01.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:01.415 17:39:09 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.415 17:39:09 -- nvmf/common.sh@7 -- # uname -s 00:29:01.415 17:39:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.415 17:39:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.415 17:39:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.415 17:39:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.415 17:39:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.415 17:39:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.415 17:39:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.416 17:39:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.416 17:39:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.416 17:39:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.416 17:39:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:01.416 17:39:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:01.416 17:39:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.416 17:39:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.416 17:39:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.416 17:39:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.416 17:39:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.416 17:39:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.416 17:39:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.416 17:39:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.416 17:39:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.416 17:39:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.416 17:39:09 -- paths/export.sh@5 -- # export PATH 00:29:01.416 17:39:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.416 17:39:09 -- nvmf/common.sh@46 -- # : 0 00:29:01.416 17:39:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:01.416 17:39:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:01.416 17:39:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:01.416 17:39:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.416 17:39:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.416 17:39:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:01.416 17:39:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:01.416 17:39:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:01.416 17:39:09 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:01.416 17:39:09 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:01.416 17:39:09 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:01.416 17:39:09 -- host/perf.sh@17 -- # nvmftestinit 00:29:01.416 17:39:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:01.416 17:39:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.416 17:39:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:01.416 17:39:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:01.416 17:39:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:01.416 17:39:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.416 17:39:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:29:01.416 17:39:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.416 17:39:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:01.416 17:39:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:01.416 17:39:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:01.416 17:39:09 -- common/autotest_common.sh@10 -- # set +x 00:29:09.608 17:39:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:09.608 17:39:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:09.608 17:39:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:09.608 17:39:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:09.608 17:39:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:09.608 17:39:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:09.608 17:39:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:09.608 17:39:16 -- nvmf/common.sh@294 -- # net_devs=() 00:29:09.608 17:39:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:09.608 17:39:16 -- nvmf/common.sh@295 -- # e810=() 00:29:09.608 17:39:16 -- nvmf/common.sh@295 -- # local -ga e810 00:29:09.608 17:39:16 -- nvmf/common.sh@296 -- # x722=() 00:29:09.608 17:39:16 -- nvmf/common.sh@296 -- # local -ga x722 00:29:09.608 17:39:16 -- nvmf/common.sh@297 -- # mlx=() 00:29:09.608 17:39:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:09.608 17:39:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.608 17:39:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.608 17:39:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.608 17:39:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.608 17:39:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.608 17:39:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.608 17:39:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.608 17:39:16 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.608 17:39:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.608 17:39:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.608 17:39:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.608 17:39:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:09.608 17:39:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:09.608 17:39:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:09.608 17:39:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:09.608 17:39:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:09.608 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:09.608 17:39:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:09.608 17:39:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:09.608 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:09.608 17:39:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:09.608 
17:39:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:09.608 17:39:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:09.608 17:39:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.608 17:39:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:09.609 17:39:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.609 17:39:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:09.609 Found net devices under 0000:31:00.0: cvl_0_0 00:29:09.609 17:39:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.609 17:39:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:09.609 17:39:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.609 17:39:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:09.609 17:39:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.609 17:39:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:09.609 Found net devices under 0000:31:00.1: cvl_0_1 00:29:09.609 17:39:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.609 17:39:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:09.609 17:39:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:09.609 17:39:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:09.609 17:39:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:09.609 17:39:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:09.609 17:39:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.609 17:39:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.609 17:39:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.609 17:39:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:09.609 17:39:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.609 17:39:16 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.609 17:39:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:09.609 17:39:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.609 17:39:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.609 17:39:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:09.609 17:39:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:09.609 17:39:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.609 17:39:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.609 17:39:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.609 17:39:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.609 17:39:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:09.609 17:39:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.609 17:39:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.609 17:39:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.609 17:39:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:09.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:29:09.609 00:29:09.609 --- 10.0.0.2 ping statistics --- 00:29:09.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.609 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:29:09.609 17:39:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:29:09.609 00:29:09.609 --- 10.0.0.1 ping statistics --- 00:29:09.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.609 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:29:09.609 17:39:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.609 17:39:17 -- nvmf/common.sh@410 -- # return 0 00:29:09.609 17:39:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:09.609 17:39:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.609 17:39:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:09.609 17:39:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:09.609 17:39:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.609 17:39:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:09.609 17:39:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:09.609 17:39:17 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:09.609 17:39:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:09.609 17:39:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:09.609 17:39:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.609 17:39:17 -- nvmf/common.sh@469 -- # nvmfpid=3349507 00:29:09.609 17:39:17 -- nvmf/common.sh@470 -- # waitforlisten 3349507 00:29:09.609 17:39:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:09.609 17:39:17 -- common/autotest_common.sh@819 -- # '[' -z 3349507 ']' 00:29:09.609 17:39:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.609 17:39:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:09.609 17:39:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:09.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.609 17:39:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:09.609 17:39:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.609 [2024-10-13 17:39:17.421933] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:09.609 [2024-10-13 17:39:17.421997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.609 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.609 [2024-10-13 17:39:17.496340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.609 [2024-10-13 17:39:17.533960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:09.609 [2024-10-13 17:39:17.534115] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.609 [2024-10-13 17:39:17.534127] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.609 [2024-10-13 17:39:17.534135] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:09.609 [2024-10-13 17:39:17.534229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.609 [2024-10-13 17:39:17.534381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.609 [2024-10-13 17:39:17.534548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.609 [2024-10-13 17:39:17.534550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.869 17:39:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:09.869 17:39:18 -- common/autotest_common.sh@852 -- # return 0 00:29:09.869 17:39:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:09.869 17:39:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:09.869 17:39:18 -- common/autotest_common.sh@10 -- # set +x 00:29:09.869 17:39:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.869 17:39:18 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:09.869 17:39:18 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:10.441 17:39:18 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:10.441 17:39:18 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:10.441 17:39:18 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:29:10.441 17:39:18 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:10.705 17:39:19 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:10.705 17:39:19 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:29:10.705 17:39:19 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:10.705 17:39:19 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:10.705 17:39:19 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:29:10.974 [2024-10-13 17:39:19.244292] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.974 17:39:19 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.974 17:39:19 -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:10.974 17:39:19 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:11.234 17:39:19 -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:11.235 17:39:19 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:11.494 17:39:19 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.494 [2024-10-13 17:39:19.942920] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.494 17:39:19 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:11.754 17:39:20 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:29:11.754 17:39:20 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:11.754 17:39:20 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:11.754 17:39:20 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:13.136 Initializing NVMe Controllers 00:29:13.136 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:29:13.136 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:29:13.136 Initialization complete. Launching workers. 
00:29:13.136 ======================================================== 00:29:13.136 Latency(us) 00:29:13.136 Device Information : IOPS MiB/s Average min max 00:29:13.136 PCIE (0000:65:00.0) NSID 1 from core 0: 80368.69 313.94 397.47 13.26 5220.35 00:29:13.136 ======================================================== 00:29:13.136 Total : 80368.69 313.94 397.47 13.26 5220.35 00:29:13.136 00:29:13.136 17:39:21 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:13.136 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.518 Initializing NVMe Controllers 00:29:14.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:14.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:14.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:14.518 Initialization complete. Launching workers. 
00:29:14.518 ======================================================== 00:29:14.518 Latency(us) 00:29:14.518 Device Information : IOPS MiB/s Average min max 00:29:14.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.95 0.33 12102.22 266.81 45318.34 00:29:14.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.96 0.22 18723.31 7964.27 50880.38 00:29:14.518 ======================================================== 00:29:14.518 Total : 139.91 0.55 14750.65 266.81 50880.38 00:29:14.518 00:29:14.518 17:39:22 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:14.518 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.902 Initializing NVMe Controllers 00:29:15.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:15.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:15.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:15.902 Initialization complete. Launching workers. 
00:29:15.902 ======================================================== 00:29:15.902 Latency(us) 00:29:15.902 Device Information : IOPS MiB/s Average min max 00:29:15.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10493.56 40.99 3061.64 363.72 10266.75 00:29:15.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3851.84 15.05 8347.62 5073.65 15538.79 00:29:15.902 ======================================================== 00:29:15.902 Total : 14345.40 56.04 4480.96 363.72 15538.79 00:29:15.902 00:29:15.902 17:39:24 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:15.902 17:39:24 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:15.902 17:39:24 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:15.902 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.446 Initializing NVMe Controllers 00:29:18.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.446 Controller IO queue size 128, less than required. 00:29:18.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:18.446 Controller IO queue size 128, less than required. 00:29:18.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:18.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:18.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:18.446 Initialization complete. Launching workers. 
00:29:18.446 ======================================================== 00:29:18.446 Latency(us) 00:29:18.446 Device Information : IOPS MiB/s Average min max 00:29:18.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1583.98 396.00 82212.82 47147.88 140031.53 00:29:18.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 573.49 143.37 228493.19 91713.36 369881.34 00:29:18.446 ======================================================== 00:29:18.446 Total : 2157.48 539.37 121096.61 47147.88 369881.34 00:29:18.446 00:29:18.446 17:39:26 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:18.446 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.446 No valid NVMe controllers or AIO or URING devices found 00:29:18.446 Initializing NVMe Controllers 00:29:18.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.446 Controller IO queue size 128, less than required. 00:29:18.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:18.446 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:18.446 Controller IO queue size 128, less than required. 00:29:18.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:18.446 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:18.446 WARNING: Some requested NVMe devices were skipped 00:29:18.446 17:39:26 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:18.446 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.992 Initializing NVMe Controllers 00:29:20.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.992 Controller IO queue size 128, less than required. 00:29:20.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:20.992 Controller IO queue size 128, less than required. 00:29:20.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:20.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:20.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:20.993 Initialization complete. Launching workers. 
00:29:20.993 00:29:20.993 ==================== 00:29:20.993 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:20.993 TCP transport: 00:29:20.993 polls: 21127 00:29:20.993 idle_polls: 12480 00:29:20.993 sock_completions: 8647 00:29:20.993 nvme_completions: 6249 00:29:20.993 submitted_requests: 9619 00:29:20.993 queued_requests: 1 00:29:20.993 00:29:20.993 ==================== 00:29:20.993 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:20.993 TCP transport: 00:29:20.993 polls: 21384 00:29:20.993 idle_polls: 12905 00:29:20.993 sock_completions: 8479 00:29:20.993 nvme_completions: 6374 00:29:20.993 submitted_requests: 9832 00:29:20.993 queued_requests: 1 00:29:20.993 ======================================================== 00:29:20.993 Latency(us) 00:29:20.993 Device Information : IOPS MiB/s Average min max 00:29:20.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1625.57 406.39 79607.14 48856.13 117640.76 00:29:20.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1656.57 414.14 78486.50 37522.06 122329.84 00:29:20.993 ======================================================== 00:29:20.993 Total : 3282.14 820.54 79041.53 37522.06 122329.84 00:29:20.993 00:29:20.993 17:39:29 -- host/perf.sh@66 -- # sync 00:29:20.993 17:39:29 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.253 17:39:29 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:21.253 17:39:29 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:29:21.253 17:39:29 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:22.197 17:39:30 -- host/perf.sh@72 -- # ls_guid=d7d95317-2cb5-483b-849b-54bef0ff07d3 00:29:22.197 17:39:30 -- host/perf.sh@73 -- # get_lvs_free_mb d7d95317-2cb5-483b-849b-54bef0ff07d3 
00:29:22.197 17:39:30 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d7d95317-2cb5-483b-849b-54bef0ff07d3 00:29:22.197 17:39:30 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:22.197 17:39:30 -- common/autotest_common.sh@1345 -- # local fc 00:29:22.197 17:39:30 -- common/autotest_common.sh@1346 -- # local cs 00:29:22.197 17:39:30 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:22.458 17:39:30 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:22.458 { 00:29:22.458 "uuid": "d7d95317-2cb5-483b-849b-54bef0ff07d3", 00:29:22.458 "name": "lvs_0", 00:29:22.458 "base_bdev": "Nvme0n1", 00:29:22.458 "total_data_clusters": 457407, 00:29:22.458 "free_clusters": 457407, 00:29:22.458 "block_size": 512, 00:29:22.458 "cluster_size": 4194304 00:29:22.458 } 00:29:22.458 ]' 00:29:22.458 17:39:30 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d7d95317-2cb5-483b-849b-54bef0ff07d3") .free_clusters' 00:29:22.458 17:39:30 -- common/autotest_common.sh@1348 -- # fc=457407 00:29:22.458 17:39:30 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d7d95317-2cb5-483b-849b-54bef0ff07d3") .cluster_size' 00:29:22.458 17:39:30 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:22.458 17:39:30 -- common/autotest_common.sh@1352 -- # free_mb=1829628 00:29:22.458 17:39:30 -- common/autotest_common.sh@1353 -- # echo 1829628 00:29:22.458 1829628 00:29:22.458 17:39:30 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:29:22.458 17:39:30 -- host/perf.sh@78 -- # free_mb=20480 00:29:22.458 17:39:30 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d7d95317-2cb5-483b-849b-54bef0ff07d3 lbd_0 20480 00:29:22.718 17:39:31 -- host/perf.sh@80 -- # lb_guid=381b8b9d-f2a8-49a4-aa45-82eef732ed53 00:29:22.718 17:39:31 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore 381b8b9d-f2a8-49a4-aa45-82eef732ed53 lvs_n_0 00:29:24.632 17:39:32 -- host/perf.sh@83 -- # ls_nested_guid=bbe52344-ddf8-4046-b2e2-6800a202509a 00:29:24.632 17:39:32 -- host/perf.sh@84 -- # get_lvs_free_mb bbe52344-ddf8-4046-b2e2-6800a202509a 00:29:24.632 17:39:32 -- common/autotest_common.sh@1343 -- # local lvs_uuid=bbe52344-ddf8-4046-b2e2-6800a202509a 00:29:24.632 17:39:32 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:24.632 17:39:32 -- common/autotest_common.sh@1345 -- # local fc 00:29:24.632 17:39:32 -- common/autotest_common.sh@1346 -- # local cs 00:29:24.632 17:39:32 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:24.632 17:39:32 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:24.632 { 00:29:24.632 "uuid": "d7d95317-2cb5-483b-849b-54bef0ff07d3", 00:29:24.632 "name": "lvs_0", 00:29:24.632 "base_bdev": "Nvme0n1", 00:29:24.632 "total_data_clusters": 457407, 00:29:24.632 "free_clusters": 452287, 00:29:24.632 "block_size": 512, 00:29:24.632 "cluster_size": 4194304 00:29:24.632 }, 00:29:24.632 { 00:29:24.632 "uuid": "bbe52344-ddf8-4046-b2e2-6800a202509a", 00:29:24.632 "name": "lvs_n_0", 00:29:24.632 "base_bdev": "381b8b9d-f2a8-49a4-aa45-82eef732ed53", 00:29:24.632 "total_data_clusters": 5114, 00:29:24.632 "free_clusters": 5114, 00:29:24.632 "block_size": 512, 00:29:24.632 "cluster_size": 4194304 00:29:24.632 } 00:29:24.632 ]' 00:29:24.632 17:39:32 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="bbe52344-ddf8-4046-b2e2-6800a202509a") .free_clusters' 00:29:24.632 17:39:32 -- common/autotest_common.sh@1348 -- # fc=5114 00:29:24.632 17:39:32 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="bbe52344-ddf8-4046-b2e2-6800a202509a") .cluster_size' 00:29:24.632 17:39:33 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:24.632 17:39:33 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:29:24.632 17:39:33 
-- common/autotest_common.sh@1353 -- # echo 20456 00:29:24.632 20456 00:29:24.632 17:39:33 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:24.632 17:39:33 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bbe52344-ddf8-4046-b2e2-6800a202509a lbd_nest_0 20456 00:29:24.893 17:39:33 -- host/perf.sh@88 -- # lb_nested_guid=f99ffda0-6db6-4c54-a273-c159a30326d8 00:29:24.893 17:39:33 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.893 17:39:33 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:24.893 17:39:33 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f99ffda0-6db6-4c54-a273-c159a30326d8 00:29:25.154 17:39:33 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.414 17:39:33 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:25.415 17:39:33 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:25.415 17:39:33 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:25.415 17:39:33 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:25.415 17:39:33 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.415 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.645 Initializing NVMe Controllers 00:29:37.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:37.645 Initialization complete. Launching workers. 
00:29:37.645 ======================================================== 00:29:37.645 Latency(us) 00:29:37.645 Device Information : IOPS MiB/s Average min max 00:29:37.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.60 0.02 21090.99 112.10 45765.15 00:29:37.645 ======================================================== 00:29:37.645 Total : 47.60 0.02 21090.99 112.10 45765.15 00:29:37.645 00:29:37.645 17:39:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:37.645 17:39:44 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:37.645 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.647 Initializing NVMe Controllers 00:29:47.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:47.647 Initialization complete. Launching workers. 
00:29:47.647 ======================================================== 00:29:47.647 Latency(us) 00:29:47.647 Device Information : IOPS MiB/s Average min max 00:29:47.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.79 7.85 15938.35 7971.35 51878.02 00:29:47.647 ======================================================== 00:29:47.647 Total : 62.79 7.85 15938.35 7971.35 51878.02 00:29:47.647 00:29:47.647 17:39:54 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:47.647 17:39:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:47.647 17:39:54 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:47.647 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.643 Initializing NVMe Controllers 00:29:57.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.643 Initialization complete. Launching workers. 
00:29:57.643 ======================================================== 00:29:57.643 Latency(us) 00:29:57.643 Device Information : IOPS MiB/s Average min max 00:29:57.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8680.60 4.24 3687.78 442.99 9852.49 00:29:57.643 ======================================================== 00:29:57.643 Total : 8680.60 4.24 3687.78 442.99 9852.49 00:29:57.643 00:29:57.643 17:40:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:57.643 17:40:04 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:57.643 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.841 Initializing NVMe Controllers 00:30:07.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:07.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:07.841 Initialization complete. Launching workers. 
00:30:07.841 ======================================================== 00:30:07.841 Latency(us) 00:30:07.841 Device Information : IOPS MiB/s Average min max 00:30:07.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3768.28 471.04 8496.03 612.60 22943.72 00:30:07.841 ======================================================== 00:30:07.841 Total : 3768.28 471.04 8496.03 612.60 22943.72 00:30:07.841 00:30:07.841 17:40:15 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:07.841 17:40:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:07.841 17:40:15 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:07.841 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.850 Initializing NVMe Controllers 00:30:17.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.850 Controller IO queue size 128, less than required. 00:30:17.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:17.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:17.850 Initialization complete. Launching workers. 
00:30:17.850 ======================================================== 00:30:17.850 Latency(us) 00:30:17.850 Device Information : IOPS MiB/s Average min max 00:30:17.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15900.80 7.76 8051.91 1928.23 16768.37 00:30:17.850 ======================================================== 00:30:17.850 Total : 15900.80 7.76 8051.91 1928.23 16768.37 00:30:17.850 00:30:17.850 17:40:25 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:17.850 17:40:25 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:17.850 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.850 Initializing NVMe Controllers 00:30:27.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.850 Controller IO queue size 128, less than required. 00:30:27.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:27.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:27.850 Initialization complete. Launching workers. 
00:30:27.850 ======================================================== 00:30:27.850 Latency(us) 00:30:27.850 Device Information : IOPS MiB/s Average min max 00:30:27.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1173.20 146.65 109595.06 15681.54 234484.21 00:30:27.850 ======================================================== 00:30:27.850 Total : 1173.20 146.65 109595.06 15681.54 234484.21 00:30:27.850 00:30:27.850 17:40:35 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:27.850 17:40:36 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f99ffda0-6db6-4c54-a273-c159a30326d8 00:30:29.762 17:40:37 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:29.762 17:40:37 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 381b8b9d-f2a8-49a4-aa45-82eef732ed53 00:30:29.762 17:40:38 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:30.022 17:40:38 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:30.022 17:40:38 -- host/perf.sh@114 -- # nvmftestfini 00:30:30.022 17:40:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:30.022 17:40:38 -- nvmf/common.sh@116 -- # sync 00:30:30.022 17:40:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:30.022 17:40:38 -- nvmf/common.sh@119 -- # set +e 00:30:30.022 17:40:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:30.022 17:40:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:30.022 rmmod nvme_tcp 00:30:30.022 rmmod nvme_fabrics 00:30:30.022 rmmod nvme_keyring 00:30:30.022 17:40:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:30.022 17:40:38 -- nvmf/common.sh@123 -- # set -e 00:30:30.022 17:40:38 -- 
nvmf/common.sh@124 -- # return 0 00:30:30.022 17:40:38 -- nvmf/common.sh@477 -- # '[' -n 3349507 ']' 00:30:30.022 17:40:38 -- nvmf/common.sh@478 -- # killprocess 3349507 00:30:30.022 17:40:38 -- common/autotest_common.sh@926 -- # '[' -z 3349507 ']' 00:30:30.022 17:40:38 -- common/autotest_common.sh@930 -- # kill -0 3349507 00:30:30.022 17:40:38 -- common/autotest_common.sh@931 -- # uname 00:30:30.022 17:40:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:30.022 17:40:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3349507 00:30:30.022 17:40:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:30.022 17:40:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:30.022 17:40:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3349507' 00:30:30.022 killing process with pid 3349507 00:30:30.022 17:40:38 -- common/autotest_common.sh@945 -- # kill 3349507 00:30:30.022 17:40:38 -- common/autotest_common.sh@950 -- # wait 3349507 00:30:31.935 17:40:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:31.935 17:40:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:31.935 17:40:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:31.935 17:40:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:31.935 17:40:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:31.935 17:40:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.935 17:40:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.935 17:40:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.483 17:40:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:34.483 00:30:34.483 real 1m32.833s 00:30:34.483 user 5m27.642s 00:30:34.483 sys 0m15.533s 00:30:34.483 17:40:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:34.483 17:40:42 -- common/autotest_common.sh@10 -- # set +x 00:30:34.483 
************************************ 00:30:34.483 END TEST nvmf_perf 00:30:34.483 ************************************ 00:30:34.483 17:40:42 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:34.483 17:40:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:34.483 17:40:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:34.483 17:40:42 -- common/autotest_common.sh@10 -- # set +x 00:30:34.483 ************************************ 00:30:34.483 START TEST nvmf_fio_host 00:30:34.483 ************************************ 00:30:34.483 17:40:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:34.483 * Looking for test storage... 00:30:34.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:34.483 17:40:42 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.483 17:40:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.483 17:40:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.483 17:40:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.483 17:40:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.483 17:40:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.483 17:40:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.483 17:40:42 -- paths/export.sh@5 -- # export PATH 00:30:34.483 17:40:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.483 17:40:42 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.483 17:40:42 -- nvmf/common.sh@7 -- # uname -s 00:30:34.483 17:40:42 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:30:34.483 17:40:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.483 17:40:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.483 17:40:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.483 17:40:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.483 17:40:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.483 17:40:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.483 17:40:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.483 17:40:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.483 17:40:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.483 17:40:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:34.483 17:40:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:34.484 17:40:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.484 17:40:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.484 17:40:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:34.484 17:40:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.484 17:40:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.484 17:40:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.484 17:40:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.484 17:40:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.484 17:40:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.484 17:40:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.484 17:40:42 -- paths/export.sh@5 -- # export PATH 00:30:34.484 17:40:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.484 17:40:42 -- nvmf/common.sh@46 -- # : 0 00:30:34.484 17:40:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:34.484 17:40:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:34.484 17:40:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:34.484 17:40:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.484 17:40:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.484 17:40:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:34.484 17:40:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:34.484 17:40:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:34.484 17:40:42 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:34.484 17:40:42 -- host/fio.sh@14 -- # nvmftestinit 00:30:34.484 17:40:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:34.484 17:40:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.484 17:40:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:34.484 17:40:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:34.484 17:40:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:34.484 17:40:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.484 17:40:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:34.484 17:40:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:30:34.484 17:40:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:34.484 17:40:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:34.484 17:40:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:34.484 17:40:42 -- common/autotest_common.sh@10 -- # set +x 00:30:42.626 17:40:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:42.626 17:40:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:42.626 17:40:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:42.626 17:40:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:42.626 17:40:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:42.626 17:40:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:42.626 17:40:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:42.626 17:40:49 -- nvmf/common.sh@294 -- # net_devs=() 00:30:42.626 17:40:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:42.626 17:40:49 -- nvmf/common.sh@295 -- # e810=() 00:30:42.626 17:40:49 -- nvmf/common.sh@295 -- # local -ga e810 00:30:42.626 17:40:49 -- nvmf/common.sh@296 -- # x722=() 00:30:42.626 17:40:49 -- nvmf/common.sh@296 -- # local -ga x722 00:30:42.626 17:40:49 -- nvmf/common.sh@297 -- # mlx=() 00:30:42.626 17:40:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:42.626 17:40:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.626 17:40:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.626 17:40:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.626 17:40:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.626 17:40:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.626 17:40:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.626 17:40:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.626 17:40:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:30:42.626 17:40:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.626 17:40:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.626 17:40:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.626 17:40:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:42.626 17:40:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:42.626 17:40:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:42.626 17:40:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:42.626 17:40:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:42.626 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:42.626 17:40:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:42.626 17:40:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:42.626 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:42.626 17:40:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:42.626 17:40:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:42.626 
17:40:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:42.626 17:40:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.626 17:40:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:42.626 17:40:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.626 17:40:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:42.626 Found net devices under 0000:31:00.0: cvl_0_0 00:30:42.626 17:40:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.626 17:40:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:42.626 17:40:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.626 17:40:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:42.626 17:40:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.626 17:40:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:42.626 Found net devices under 0000:31:00.1: cvl_0_1 00:30:42.626 17:40:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.626 17:40:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:42.626 17:40:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:42.626 17:40:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:42.626 17:40:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:42.626 17:40:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.626 17:40:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.626 17:40:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.626 17:40:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:42.626 17:40:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.626 17:40:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.626 17:40:49 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:42.626 17:40:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.626 17:40:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.626 17:40:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:42.626 17:40:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:42.626 17:40:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.626 17:40:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.626 17:40:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.626 17:40:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.626 17:40:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:42.626 17:40:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.626 17:40:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.626 17:40:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.626 17:40:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:42.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms 00:30:42.626 00:30:42.626 --- 10.0.0.2 ping statistics --- 00:30:42.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.626 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms 00:30:42.626 17:40:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:30:42.626 00:30:42.626 --- 10.0.0.1 ping statistics --- 00:30:42.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.626 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:30:42.626 17:40:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.626 17:40:50 -- nvmf/common.sh@410 -- # return 0 00:30:42.626 17:40:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:42.626 17:40:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.626 17:40:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:42.626 17:40:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:42.626 17:40:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.626 17:40:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:42.626 17:40:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:42.626 17:40:50 -- host/fio.sh@16 -- # [[ y != y ]] 00:30:42.626 17:40:50 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:42.626 17:40:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:42.626 17:40:50 -- common/autotest_common.sh@10 -- # set +x 00:30:42.626 17:40:50 -- host/fio.sh@24 -- # nvmfpid=3369516 00:30:42.626 17:40:50 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:42.626 17:40:50 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:42.626 17:40:50 -- host/fio.sh@28 -- # waitforlisten 3369516 00:30:42.626 17:40:50 -- common/autotest_common.sh@819 -- # '[' -z 3369516 ']' 00:30:42.626 17:40:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.626 17:40:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:42.626 17:40:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:42.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.626 17:40:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:42.626 17:40:50 -- common/autotest_common.sh@10 -- # set +x 00:30:42.626 [2024-10-13 17:40:50.134023] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:42.626 [2024-10-13 17:40:50.134111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.626 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.626 [2024-10-13 17:40:50.210575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:42.626 [2024-10-13 17:40:50.248950] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:42.626 [2024-10-13 17:40:50.249114] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.626 [2024-10-13 17:40:50.249126] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.626 [2024-10-13 17:40:50.249135] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:42.626 [2024-10-13 17:40:50.249223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.626 [2024-10-13 17:40:50.249360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.626 [2024-10-13 17:40:50.249494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.626 [2024-10-13 17:40:50.249494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.626 17:40:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:42.626 17:40:50 -- common/autotest_common.sh@852 -- # return 0 00:30:42.626 17:40:50 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:42.627 [2024-10-13 17:40:51.079783] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.627 17:40:51 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:42.627 17:40:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:42.627 17:40:51 -- common/autotest_common.sh@10 -- # set +x 00:30:42.888 17:40:51 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:42.888 Malloc1 00:30:42.888 17:40:51 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:43.149 17:40:51 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:43.149 17:40:51 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.410 [2024-10-13 17:40:51.793502] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.410 17:40:51 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:43.671 17:40:51 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:43.671 17:40:51 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:43.671 17:40:51 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:43.671 17:40:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:43.671 17:40:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.671 17:40:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:43.671 17:40:51 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.671 17:40:51 -- common/autotest_common.sh@1320 -- # shift 00:30:43.671 17:40:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:43.671 17:40:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.671 17:40:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.671 17:40:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:43.671 17:40:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:43.671 17:40:52 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:43.671 17:40:52 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:43.671 17:40:52 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.671 17:40:52 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.671 17:40:52 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:43.671 17:40:52 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:43.671 17:40:52 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:43.671 17:40:52 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:43.671 17:40:52 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:43.671 17:40:52 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:43.932 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:43.932 fio-3.35 00:30:43.932 Starting 1 thread 00:30:43.932 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.491 00:30:46.491 test: (groupid=0, jobs=1): err= 0: pid=3370178: Sun Oct 13 17:40:54 2024 00:30:46.491 read: IOPS=14.3k, BW=55.7MiB/s (58.4MB/s)(112MiB/2004msec) 00:30:46.491 slat (usec): min=2, max=282, avg= 2.17, stdev= 2.32 00:30:46.491 clat (usec): min=3563, max=9263, avg=4940.31, stdev=716.08 00:30:46.491 lat (usec): min=3565, max=9276, avg=4942.47, stdev=716.23 00:30:46.491 clat percentiles (usec): 00:30:46.491 | 1.00th=[ 4015], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:30:46.491 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 00:30:46.491 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5604], 95.00th=[ 6783], 00:30:46.491 | 99.00th=[ 7439], 99.50th=[ 7701], 99.90th=[ 8356], 99.95th=[ 8586], 00:30:46.491 | 99.99th=[ 9241] 00:30:46.491 bw ( KiB/s): min=48552, max=60096, per=99.95%, avg=57030.00, stdev=5655.77, samples=4 00:30:46.491 iops : min=12138, max=15024, avg=14257.50, stdev=1413.94, samples=4 00:30:46.491 write: IOPS=14.3k, BW=55.8MiB/s 
(58.5MB/s)(112MiB/2004msec); 0 zone resets 00:30:46.491 slat (usec): min=2, max=260, avg= 2.23, stdev= 1.70 00:30:46.491 clat (usec): min=2791, max=7965, avg=3989.57, stdev=588.28 00:30:46.491 lat (usec): min=2793, max=7972, avg=3991.81, stdev=588.46 00:30:46.491 clat percentiles (usec): 00:30:46.491 | 1.00th=[ 3195], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3621], 00:30:46.491 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3916], 00:30:46.491 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4621], 95.00th=[ 5538], 00:30:46.491 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 7177], 99.95th=[ 7373], 00:30:46.491 | 99.99th=[ 7635] 00:30:46.491 bw ( KiB/s): min=49232, max=60072, per=100.00%, avg=57172.00, stdev=5298.23, samples=4 00:30:46.491 iops : min=12308, max=15018, avg=14293.00, stdev=1324.56, samples=4 00:30:46.491 lat (msec) : 4=34.84%, 10=65.16% 00:30:46.491 cpu : usr=73.44%, sys=25.16%, ctx=40, majf=0, minf=15 00:30:46.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:46.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:46.491 issued rwts: total=28585,28643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:46.491 00:30:46.491 Run status group 0 (all jobs): 00:30:46.491 READ: bw=55.7MiB/s (58.4MB/s), 55.7MiB/s-55.7MiB/s (58.4MB/s-58.4MB/s), io=112MiB (117MB), run=2004-2004msec 00:30:46.491 WRITE: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=112MiB (117MB), run=2004-2004msec 00:30:46.491 17:40:54 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:46.491 17:40:54 -- common/autotest_common.sh@1339 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:46.491 17:40:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:46.491 17:40:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:46.491 17:40:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:46.491 17:40:54 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:46.491 17:40:54 -- common/autotest_common.sh@1320 -- # shift 00:30:46.491 17:40:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:46.491 17:40:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.491 17:40:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:46.491 17:40:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:46.491 17:40:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:46.491 17:40:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:46.491 17:40:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:46.491 17:40:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.491 17:40:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:46.491 17:40:54 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:46.491 17:40:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:46.491 17:40:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:46.491 17:40:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:46.491 17:40:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:30:46.491 17:40:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:46.751 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:46.751 fio-3.35 00:30:46.751 Starting 1 thread 00:30:46.751 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.295 00:30:49.295 test: (groupid=0, jobs=1): err= 0: pid=3370844: Sun Oct 13 17:40:57 2024 00:30:49.295 read: IOPS=9678, BW=151MiB/s (159MB/s)(304MiB/2008msec) 00:30:49.295 slat (usec): min=3, max=111, avg= 3.69, stdev= 1.76 00:30:49.295 clat (usec): min=1577, max=14804, avg=7871.35, stdev=1955.83 00:30:49.295 lat (usec): min=1580, max=14826, avg=7875.04, stdev=1956.02 00:30:49.295 clat percentiles (usec): 00:30:49.295 | 1.00th=[ 3884], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 6063], 00:30:49.295 | 30.00th=[ 6652], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8356], 00:30:49.295 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11076], 00:30:49.295 | 99.00th=[12387], 99.50th=[13173], 99.90th=[13960], 99.95th=[14222], 00:30:49.295 | 99.99th=[14746] 00:30:49.295 bw ( KiB/s): min=69472, max=85536, per=49.52%, avg=76680.00, stdev=7260.05, samples=4 00:30:49.295 iops : min= 4342, max= 5346, avg=4792.50, stdev=453.75, samples=4 00:30:49.295 write: IOPS=5736, BW=89.6MiB/s (94.0MB/s)(156MiB/1744msec); 0 zone resets 00:30:49.295 slat (usec): min=39, max=339, avg=41.04, stdev= 7.11 00:30:49.295 clat (usec): min=2327, max=15893, avg=9110.14, stdev=1535.25 00:30:49.295 lat (usec): min=2367, max=16030, avg=9151.18, stdev=1536.85 00:30:49.295 clat percentiles (usec): 00:30:49.295 | 1.00th=[ 6259], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 7898], 00:30:49.295 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:30:49.295 | 70.00th=[ 9765], 80.00th=[10290], 
90.00th=[11076], 95.00th=[11863], 00:30:49.295 | 99.00th=[13698], 99.50th=[14222], 99.90th=[15270], 99.95th=[15664], 00:30:49.295 | 99.99th=[15795] 00:30:49.296 bw ( KiB/s): min=71840, max=89088, per=87.03%, avg=79880.00, stdev=7769.50, samples=4 00:30:49.296 iops : min= 4490, max= 5568, avg=4992.50, stdev=485.59, samples=4 00:30:49.296 lat (msec) : 2=0.02%, 4=0.86%, 10=80.70%, 20=18.42% 00:30:49.296 cpu : usr=90.58%, sys=8.67%, ctx=15, majf=0, minf=29 00:30:49.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:30:49.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:49.296 issued rwts: total=19435,10004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:49.296 00:30:49.296 Run status group 0 (all jobs): 00:30:49.296 READ: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=304MiB (318MB), run=2008-2008msec 00:30:49.296 WRITE: bw=89.6MiB/s (94.0MB/s), 89.6MiB/s-89.6MiB/s (94.0MB/s-94.0MB/s), io=156MiB (164MB), run=1744-1744msec 00:30:49.296 17:40:57 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.296 17:40:57 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:49.296 17:40:57 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:49.296 17:40:57 -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:49.296 17:40:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:49.296 17:40:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:49.296 17:40:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:49.296 17:40:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:49.296 17:40:57 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:30:49.296 17:40:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:49.296 17:40:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:30:49.296 17:40:57 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:30:49.866 Nvme0n1 00:30:49.866 17:40:58 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:50.438 17:40:58 -- host/fio.sh@53 -- # ls_guid=c270a679-aa2d-499e-ba2c-2fcb94a91a95 00:30:50.438 17:40:58 -- host/fio.sh@54 -- # get_lvs_free_mb c270a679-aa2d-499e-ba2c-2fcb94a91a95 00:30:50.438 17:40:58 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c270a679-aa2d-499e-ba2c-2fcb94a91a95 00:30:50.438 17:40:58 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:50.438 17:40:58 -- common/autotest_common.sh@1345 -- # local fc 00:30:50.438 17:40:58 -- common/autotest_common.sh@1346 -- # local cs 00:30:50.438 17:40:58 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:50.438 17:40:58 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:50.438 { 00:30:50.438 "uuid": "c270a679-aa2d-499e-ba2c-2fcb94a91a95", 00:30:50.438 "name": "lvs_0", 00:30:50.438 "base_bdev": "Nvme0n1", 00:30:50.438 "total_data_clusters": 1787, 00:30:50.438 "free_clusters": 1787, 00:30:50.438 "block_size": 512, 00:30:50.438 "cluster_size": 1073741824 00:30:50.438 } 00:30:50.438 ]' 00:30:50.438 17:40:58 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c270a679-aa2d-499e-ba2c-2fcb94a91a95") .free_clusters' 00:30:50.438 17:40:58 -- common/autotest_common.sh@1348 -- # fc=1787 00:30:50.700 17:40:58 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c270a679-aa2d-499e-ba2c-2fcb94a91a95") .cluster_size' 00:30:50.700 17:40:59 -- 
common/autotest_common.sh@1349 -- # cs=1073741824 00:30:50.700 17:40:59 -- common/autotest_common.sh@1352 -- # free_mb=1829888 00:30:50.700 17:40:59 -- common/autotest_common.sh@1353 -- # echo 1829888 00:30:50.700 1829888 00:30:50.700 17:40:59 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:30:50.700 5e442492-cfd4-4910-b48b-d01a041b9033 00:30:50.700 17:40:59 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:50.961 17:40:59 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:51.222 17:40:59 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:51.222 17:40:59 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:51.222 17:40:59 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:51.222 17:40:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:51.222 17:40:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:51.222 17:40:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:51.222 17:40:59 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:51.222 17:40:59 -- common/autotest_common.sh@1320 -- # shift 00:30:51.222 17:40:59 -- 
common/autotest_common.sh@1322 -- # local asan_lib= 00:30:51.222 17:40:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.222 17:40:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:51.222 17:40:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:51.222 17:40:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:51.222 17:40:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:51.222 17:40:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:51.222 17:40:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.222 17:40:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:51.222 17:40:59 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:51.222 17:40:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:51.222 17:40:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:51.222 17:40:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:51.222 17:40:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:51.222 17:40:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:51.808 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:51.808 fio-3.35 00:30:51.808 Starting 1 thread 00:30:51.808 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.352 00:30:54.352 test: (groupid=0, jobs=1): err= 0: pid=3371914: Sun Oct 13 17:41:02 2024 00:30:54.352 read: IOPS=11.0k, BW=43.0MiB/s (45.1MB/s)(86.3MiB/2005msec) 00:30:54.352 slat (usec): min=2, max=120, avg= 2.32, stdev= 1.20 00:30:54.352 clat (usec): 
min=2335, max=10349, avg=6406.92, stdev=547.26 00:30:54.352 lat (usec): min=2340, max=10351, avg=6409.24, stdev=547.27 00:30:54.352 clat percentiles (usec): 00:30:54.352 | 1.00th=[ 5145], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:30:54.352 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:30:54.352 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:30:54.352 | 99.00th=[ 7767], 99.50th=[ 8848], 99.90th=[ 9503], 99.95th=[ 9634], 00:30:54.352 | 99.99th=[10290] 00:30:54.352 bw ( KiB/s): min=42634, max=44640, per=99.86%, avg=43998.50, stdev=924.40, samples=4 00:30:54.352 iops : min=10658, max=11160, avg=10999.50, stdev=231.35, samples=4 00:30:54.352 write: IOPS=11.0k, BW=42.9MiB/s (45.0MB/s)(86.0MiB/2005msec); 0 zone resets 00:30:54.352 slat (nsec): min=2091, max=107729, avg=2378.93, stdev=869.12 00:30:54.352 clat (usec): min=1142, max=9193, avg=5129.93, stdev=480.12 00:30:54.352 lat (usec): min=1149, max=9196, avg=5132.30, stdev=480.22 00:30:54.352 clat percentiles (usec): 00:30:54.352 | 1.00th=[ 4080], 5.00th=[ 4424], 10.00th=[ 4621], 20.00th=[ 4752], 00:30:54.352 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5211], 00:30:54.352 | 70.00th=[ 5342], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5800], 00:30:54.352 | 99.00th=[ 6259], 99.50th=[ 7504], 99.90th=[ 8291], 99.95th=[ 8455], 00:30:54.352 | 99.99th=[ 9110] 00:30:54.352 bw ( KiB/s): min=43001, max=44672, per=99.96%, avg=43902.25, stdev=688.38, samples=4 00:30:54.352 iops : min=10750, max=11168, avg=10975.50, stdev=172.20, samples=4 00:30:54.352 lat (msec) : 2=0.02%, 4=0.40%, 10=99.57%, 20=0.01% 00:30:54.352 cpu : usr=75.40%, sys=23.80%, ctx=54, majf=0, minf=15 00:30:54.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:54.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:54.352 issued rwts: 
total=22085,22014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:54.352 00:30:54.352 Run status group 0 (all jobs): 00:30:54.352 READ: bw=43.0MiB/s (45.1MB/s), 43.0MiB/s-43.0MiB/s (45.1MB/s-45.1MB/s), io=86.3MiB (90.5MB), run=2005-2005msec 00:30:54.352 WRITE: bw=42.9MiB/s (45.0MB/s), 42.9MiB/s-42.9MiB/s (45.0MB/s-45.0MB/s), io=86.0MiB (90.2MB), run=2005-2005msec 00:30:54.352 17:41:02 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:54.352 17:41:02 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:54.924 17:41:03 -- host/fio.sh@64 -- # ls_nested_guid=e73fc96c-8651-4082-8f76-24a9980cdc2e 00:30:54.924 17:41:03 -- host/fio.sh@65 -- # get_lvs_free_mb e73fc96c-8651-4082-8f76-24a9980cdc2e 00:30:54.924 17:41:03 -- common/autotest_common.sh@1343 -- # local lvs_uuid=e73fc96c-8651-4082-8f76-24a9980cdc2e 00:30:54.924 17:41:03 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:54.924 17:41:03 -- common/autotest_common.sh@1345 -- # local fc 00:30:54.924 17:41:03 -- common/autotest_common.sh@1346 -- # local cs 00:30:54.924 17:41:03 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:55.184 17:41:03 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:55.184 { 00:30:55.184 "uuid": "c270a679-aa2d-499e-ba2c-2fcb94a91a95", 00:30:55.184 "name": "lvs_0", 00:30:55.184 "base_bdev": "Nvme0n1", 00:30:55.184 "total_data_clusters": 1787, 00:30:55.184 "free_clusters": 0, 00:30:55.184 "block_size": 512, 00:30:55.184 "cluster_size": 1073741824 00:30:55.184 }, 00:30:55.184 { 00:30:55.184 "uuid": "e73fc96c-8651-4082-8f76-24a9980cdc2e", 00:30:55.184 "name": "lvs_n_0", 00:30:55.184 "base_bdev": "5e442492-cfd4-4910-b48b-d01a041b9033", 
00:30:55.184 "total_data_clusters": 457025, 00:30:55.184 "free_clusters": 457025, 00:30:55.184 "block_size": 512, 00:30:55.184 "cluster_size": 4194304 00:30:55.184 } 00:30:55.184 ]' 00:30:55.184 17:41:03 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="e73fc96c-8651-4082-8f76-24a9980cdc2e") .free_clusters' 00:30:55.184 17:41:03 -- common/autotest_common.sh@1348 -- # fc=457025 00:30:55.184 17:41:03 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="e73fc96c-8651-4082-8f76-24a9980cdc2e") .cluster_size' 00:30:55.184 17:41:03 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:55.184 17:41:03 -- common/autotest_common.sh@1352 -- # free_mb=1828100 00:30:55.184 17:41:03 -- common/autotest_common.sh@1353 -- # echo 1828100 00:30:55.184 1828100 00:30:55.184 17:41:03 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:30:56.567 15c938a5-e31b-448b-909b-64f8ccb771a1 00:30:56.567 17:41:04 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:56.567 17:41:04 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:56.567 17:41:05 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:56.827 17:41:05 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:56.827 17:41:05 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:56.827 17:41:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:56.827 17:41:05 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:56.827 17:41:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:56.827 17:41:05 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:56.827 17:41:05 -- common/autotest_common.sh@1320 -- # shift 00:30:56.827 17:41:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:56.827 17:41:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.827 17:41:05 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:56.827 17:41:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:56.827 17:41:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:56.827 17:41:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:56.827 17:41:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:56.827 17:41:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.827 17:41:05 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:56.827 17:41:05 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:56.827 17:41:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:56.827 17:41:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:56.827 17:41:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:56.827 17:41:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:56.827 17:41:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:57.087 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:57.087 fio-3.35 00:30:57.087 Starting 1 thread 00:30:57.347 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.889 00:30:59.889 test: (groupid=0, jobs=1): err= 0: pid=3373179: Sun Oct 13 17:41:07 2024 00:30:59.889 read: IOPS=9444, BW=36.9MiB/s (38.7MB/s)(74.0MiB/2005msec) 00:30:59.889 slat (usec): min=2, max=109, avg= 2.22, stdev= 1.05 00:30:59.889 clat (usec): min=2112, max=12108, avg=7477.83, stdev=569.15 00:30:59.889 lat (usec): min=2129, max=12110, avg=7480.05, stdev=569.10 00:30:59.889 clat percentiles (usec): 00:30:59.889 | 1.00th=[ 6194], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 7046], 00:30:59.889 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:30:59.889 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8160], 95.00th=[ 8356], 00:30:59.889 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[ 9372], 99.95th=[11207], 00:30:59.889 | 99.99th=[12125] 00:30:59.889 bw ( KiB/s): min=36232, max=38488, per=99.90%, avg=37742.00, stdev=1022.69, samples=4 00:30:59.889 iops : min= 9058, max= 9622, avg=9435.50, stdev=255.67, samples=4 00:30:59.889 write: IOPS=9444, BW=36.9MiB/s (38.7MB/s)(74.0MiB/2005msec); 0 zone resets 00:30:59.889 slat (nsec): min=2105, max=97189, avg=2284.65, stdev=748.42 00:30:59.889 clat (usec): min=1007, max=11150, avg=5968.85, stdev=501.34 00:30:59.889 lat (usec): min=1013, max=11152, avg=5971.13, stdev=501.32 00:30:59.889 clat percentiles (usec): 00:30:59.889 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:30:59.889 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6063], 00:30:59.889 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6587], 95.00th=[ 6718], 00:30:59.889 | 99.00th=[ 7046], 99.50th=[ 7242], 
99.90th=[ 9110], 99.95th=[10028], 00:30:59.889 | 99.99th=[11076] 00:30:59.889 bw ( KiB/s): min=37208, max=38096, per=99.93%, avg=37750.00, stdev=418.73, samples=4 00:30:59.889 iops : min= 9302, max= 9524, avg=9437.50, stdev=104.68, samples=4 00:30:59.889 lat (msec) : 2=0.01%, 4=0.10%, 10=99.82%, 20=0.07% 00:30:59.889 cpu : usr=72.70%, sys=26.30%, ctx=37, majf=0, minf=15 00:30:59.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:59.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:59.889 issued rwts: total=18937,18936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:59.889 00:30:59.889 Run status group 0 (all jobs): 00:30:59.889 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.0MiB (77.6MB), run=2005-2005msec 00:30:59.889 WRITE: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.0MiB (77.6MB), run=2005-2005msec 00:30:59.889 17:41:07 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:59.889 17:41:08 -- host/fio.sh@74 -- # sync 00:30:59.889 17:41:08 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:01.799 17:41:10 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:02.060 17:41:10 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:02.631 17:41:10 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:02.631 17:41:11 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_detach_controller Nvme0 00:31:05.175 17:41:13 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:05.175 17:41:13 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:05.175 17:41:13 -- host/fio.sh@86 -- # nvmftestfini 00:31:05.175 17:41:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:05.175 17:41:13 -- nvmf/common.sh@116 -- # sync 00:31:05.175 17:41:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:05.175 17:41:13 -- nvmf/common.sh@119 -- # set +e 00:31:05.175 17:41:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:05.175 17:41:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:05.175 rmmod nvme_tcp 00:31:05.175 rmmod nvme_fabrics 00:31:05.175 rmmod nvme_keyring 00:31:05.175 17:41:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:05.175 17:41:13 -- nvmf/common.sh@123 -- # set -e 00:31:05.175 17:41:13 -- nvmf/common.sh@124 -- # return 0 00:31:05.175 17:41:13 -- nvmf/common.sh@477 -- # '[' -n 3369516 ']' 00:31:05.175 17:41:13 -- nvmf/common.sh@478 -- # killprocess 3369516 00:31:05.175 17:41:13 -- common/autotest_common.sh@926 -- # '[' -z 3369516 ']' 00:31:05.175 17:41:13 -- common/autotest_common.sh@930 -- # kill -0 3369516 00:31:05.175 17:41:13 -- common/autotest_common.sh@931 -- # uname 00:31:05.175 17:41:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:05.175 17:41:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3369516 00:31:05.175 17:41:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:05.175 17:41:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:05.175 17:41:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3369516' 00:31:05.175 killing process with pid 3369516 00:31:05.175 17:41:13 -- common/autotest_common.sh@945 -- # kill 3369516 00:31:05.175 17:41:13 -- common/autotest_common.sh@950 -- # wait 3369516 00:31:05.175 17:41:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:05.175 17:41:13 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:05.175 17:41:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:05.175 17:41:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:05.175 17:41:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:05.175 17:41:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.175 17:41:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:05.175 17:41:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.090 17:41:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:07.090 00:31:07.090 real 0m32.850s 00:31:07.090 user 2m38.303s 00:31:07.090 sys 0m9.714s 00:31:07.090 17:41:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:07.090 17:41:15 -- common/autotest_common.sh@10 -- # set +x 00:31:07.090 ************************************ 00:31:07.090 END TEST nvmf_fio_host 00:31:07.090 ************************************ 00:31:07.090 17:41:15 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:07.090 17:41:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:07.090 17:41:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:07.090 17:41:15 -- common/autotest_common.sh@10 -- # set +x 00:31:07.090 ************************************ 00:31:07.090 START TEST nvmf_failover 00:31:07.090 ************************************ 00:31:07.091 17:41:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:07.091 * Looking for test storage... 
00:31:07.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:07.091 17:41:15 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.091 17:41:15 -- nvmf/common.sh@7 -- # uname -s 00:31:07.091 17:41:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.091 17:41:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.091 17:41:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.091 17:41:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.091 17:41:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.091 17:41:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.091 17:41:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.091 17:41:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.091 17:41:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.091 17:41:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.091 17:41:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:07.091 17:41:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:07.091 17:41:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.091 17:41:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.091 17:41:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.091 17:41:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.091 17:41:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.091 17:41:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.091 17:41:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.091 17:41:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.091 17:41:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.091 17:41:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.091 17:41:15 -- paths/export.sh@5 -- # export PATH 00:31:07.091 17:41:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.091 17:41:15 -- nvmf/common.sh@46 -- # : 0 00:31:07.091 17:41:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:07.091 17:41:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:07.091 17:41:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:07.091 17:41:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.091 17:41:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.091 17:41:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:07.091 17:41:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:07.091 17:41:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:07.091 17:41:15 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:07.091 17:41:15 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:07.091 17:41:15 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:07.091 17:41:15 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:07.091 17:41:15 -- host/failover.sh@18 -- # nvmftestinit 00:31:07.091 17:41:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:07.091 17:41:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.091 17:41:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:07.091 17:41:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:07.091 17:41:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:07.091 17:41:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:31:07.091 17:41:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:07.091 17:41:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.091 17:41:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:07.091 17:41:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:07.091 17:41:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:07.091 17:41:15 -- common/autotest_common.sh@10 -- # set +x 00:31:15.239 17:41:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:15.240 17:41:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:15.240 17:41:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:15.240 17:41:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:15.240 17:41:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:15.240 17:41:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:15.240 17:41:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:15.240 17:41:22 -- nvmf/common.sh@294 -- # net_devs=() 00:31:15.240 17:41:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:15.240 17:41:22 -- nvmf/common.sh@295 -- # e810=() 00:31:15.240 17:41:22 -- nvmf/common.sh@295 -- # local -ga e810 00:31:15.240 17:41:22 -- nvmf/common.sh@296 -- # x722=() 00:31:15.240 17:41:22 -- nvmf/common.sh@296 -- # local -ga x722 00:31:15.240 17:41:22 -- nvmf/common.sh@297 -- # mlx=() 00:31:15.240 17:41:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:15.240 17:41:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.240 17:41:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.240 17:41:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.240 17:41:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.240 17:41:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.240 17:41:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:31:15.240 17:41:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.240 17:41:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.240 17:41:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.240 17:41:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.240 17:41:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.240 17:41:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:15.240 17:41:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:15.240 17:41:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:15.240 17:41:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:15.240 17:41:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:15.240 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:15.240 17:41:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:15.240 17:41:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:15.240 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:15.240 17:41:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.240 17:41:22 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:15.240 17:41:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:15.240 17:41:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.240 17:41:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:15.240 17:41:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.240 17:41:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:15.240 Found net devices under 0000:31:00.0: cvl_0_0 00:31:15.240 17:41:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.240 17:41:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:15.240 17:41:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.240 17:41:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:15.240 17:41:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.240 17:41:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:15.240 Found net devices under 0000:31:00.1: cvl_0_1 00:31:15.240 17:41:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.240 17:41:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:15.240 17:41:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:15.240 17:41:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:15.240 17:41:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:15.240 17:41:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.240 17:41:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.240 17:41:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.240 17:41:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:31:15.240 17:41:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.240 17:41:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.240 17:41:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:15.240 17:41:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.240 17:41:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.240 17:41:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:15.240 17:41:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:15.240 17:41:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.240 17:41:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.240 17:41:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.240 17:41:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.240 17:41:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:15.240 17:41:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.240 17:41:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.240 17:41:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.240 17:41:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:15.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:31:15.240 00:31:15.240 --- 10.0.0.2 ping statistics --- 00:31:15.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.240 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:31:15.240 17:41:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:15.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:31:15.240 00:31:15.240 --- 10.0.0.1 ping statistics --- 00:31:15.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.240 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:31:15.240 17:41:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.240 17:41:23 -- nvmf/common.sh@410 -- # return 0 00:31:15.240 17:41:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:15.240 17:41:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.240 17:41:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:15.240 17:41:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:15.240 17:41:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.240 17:41:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:15.240 17:41:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:15.240 17:41:23 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:15.240 17:41:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:15.240 17:41:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:15.240 17:41:23 -- common/autotest_common.sh@10 -- # set +x 00:31:15.240 17:41:23 -- nvmf/common.sh@469 -- # nvmfpid=3378839 00:31:15.240 17:41:23 -- nvmf/common.sh@470 -- # waitforlisten 3378839 00:31:15.240 17:41:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:15.240 17:41:23 -- common/autotest_common.sh@819 -- # '[' -z 3378839 ']' 00:31:15.240 17:41:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.240 17:41:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:15.240 17:41:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:15.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.240 17:41:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:15.240 17:41:23 -- common/autotest_common.sh@10 -- # set +x 00:31:15.240 [2024-10-13 17:41:23.126320] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:15.240 [2024-10-13 17:41:23.126383] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.240 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.240 [2024-10-13 17:41:23.219396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:15.240 [2024-10-13 17:41:23.265182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:15.240 [2024-10-13 17:41:23.265337] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.240 [2024-10-13 17:41:23.265347] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.240 [2024-10-13 17:41:23.265357] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:15.240 [2024-10-13 17:41:23.265494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.240 [2024-10-13 17:41:23.265626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.240 [2024-10-13 17:41:23.265627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.501 17:41:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:15.501 17:41:23 -- common/autotest_common.sh@852 -- # return 0 00:31:15.501 17:41:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:15.501 17:41:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:15.501 17:41:23 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 17:41:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.501 17:41:23 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:15.761 [2024-10-13 17:41:24.089030] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.761 17:41:24 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:15.761 Malloc0 00:31:16.021 17:41:24 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:16.021 17:41:24 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.281 17:41:24 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.281 [2024-10-13 17:41:24.793812] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.541 17:41:24 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:16.541 [2024-10-13 17:41:24.958261] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:16.541 17:41:24 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:16.801 [2024-10-13 17:41:25.122784] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:16.801 17:41:25 -- host/failover.sh@31 -- # bdevperf_pid=3379209 00:31:16.801 17:41:25 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:16.801 17:41:25 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:16.801 17:41:25 -- host/failover.sh@34 -- # waitforlisten 3379209 /var/tmp/bdevperf.sock 00:31:16.801 17:41:25 -- common/autotest_common.sh@819 -- # '[' -z 3379209 ']' 00:31:16.801 17:41:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:16.801 17:41:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:16.801 17:41:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:16.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:16.801 17:41:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:16.801 17:41:25 -- common/autotest_common.sh@10 -- # set +x 00:31:17.742 17:41:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:17.742 17:41:25 -- common/autotest_common.sh@852 -- # return 0 00:31:17.742 17:41:25 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:18.001 NVMe0n1 00:31:18.001 17:41:26 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:18.261 00:31:18.262 17:41:26 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:18.262 17:41:26 -- host/failover.sh@39 -- # run_test_pid=3379552 00:31:18.262 17:41:26 -- host/failover.sh@41 -- # sleep 1 00:31:19.203 17:41:27 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:19.464 [2024-10-13 17:41:27.728205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20afbb0 is same with the state(5) to be set 
00:31:19.465 17:41:27 -- host/failover.sh@45 -- # sleep 3 00:31:22.763 17:41:30 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:22.764 00:31:22.764 17:41:31 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:23.028 [2024-10-13 17:41:31.330882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b1100 is same with the state(5) to be set 
00:31:23.028 17:41:31 -- host/failover.sh@50 -- # sleep 3 00:31:26.334 17:41:34 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.334 [2024-10-13 17:41:34.511870] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.334 17:41:34 -- host/failover.sh@55 -- # sleep 1 00:31:27.277 17:41:35 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:27.277 [2024-10-13 17:41:35.692471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b17e0 is same with the state(5) to be set 
00:31:27.278 17:41:35 -- host/failover.sh@59 -- # wait 3379552 00:31:33.873 0 00:31:33.873 17:41:41 -- host/failover.sh@61 -- # killprocess 3379209 00:31:33.873 17:41:41 -- common/autotest_common.sh@926 -- # '[' -z 3379209 ']' 00:31:33.873 17:41:41 -- common/autotest_common.sh@930 -- # kill -0 3379209 00:31:33.873 17:41:41 -- common/autotest_common.sh@931 -- # uname 00:31:33.873 17:41:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:33.873 17:41:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3379209 00:31:33.874 17:41:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:33.874 17:41:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:33.874 
17:41:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3379209' 00:31:33.874 killing process with pid 3379209 00:31:33.874 17:41:41 -- common/autotest_common.sh@945 -- # kill 3379209 00:31:33.874 17:41:41 -- common/autotest_common.sh@950 -- # wait 3379209 00:31:33.874 17:41:41 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:33.874 [2024-10-13 17:41:25.188419] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:33.874 [2024-10-13 17:41:25.188476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379209 ] 00:31:33.874 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.874 [2024-10-13 17:41:25.249933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.874 [2024-10-13 17:41:25.278878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.874 Running I/O for 15 seconds... 
00:31:33.874 [2024-10-13 17:41:27.729172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.874 [2024-10-13 17:41:27.729456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.874 [2024-10-13 17:41:27.729473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 
[2024-10-13 17:41:27.729499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.874 [2024-10-13 17:41:27.729539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.874 [2024-10-13 17:41:27.729589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.874 [2024-10-13 17:41:27.729625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.874 [2024-10-13 17:41:27.729674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 
[2024-10-13 17:41:27.729782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.874 [2024-10-13 17:41:27.729805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.874 [2024-10-13 17:41:27.729816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.729823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.729839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.729856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.729873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.729889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.729905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.729921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.729939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.729955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.729972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.729988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.729998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 
[2024-10-13 17:41:27.730070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.875 [2024-10-13 17:41:27.730325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 
[2024-10-13 17:41:27.730351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.875 [2024-10-13 17:41:27.730511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.875 [2024-10-13 17:41:27.730520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 
[2024-10-13 17:41:27.730632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 
[2024-10-13 17:41:27.730913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.730952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.730985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.730994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.731001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.731017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.731033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.731050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.731070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.731088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.731105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.731121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.731137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.731153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.731169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.876 [2024-10-13 17:41:27.731186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 
[2024-10-13 17:41:27.731195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.876 [2024-10-13 17:41:27.731202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.876 [2024-10-13 17:41:27.731226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:33.876 [2024-10-13 17:41:27.731234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41264 len:8 PRP1 0x0 PRP2 0x0 00:31:33.877 [2024-10-13 17:41:27.731241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.731252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:33.877 [2024-10-13 17:41:27.731258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:33.877 [2024-10-13 17:41:27.731264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41272 len:8 PRP1 0x0 PRP2 0x0 00:31:33.877 [2024-10-13 17:41:27.731271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.731279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:33.877 [2024-10-13 17:41:27.731285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:33.877 [2024-10-13 17:41:27.731291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41280 len:8 PRP1 0x0 PRP2 0x0 00:31:33.877 [2024-10-13 17:41:27.731298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 
17:41:27.731307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:33.877 [2024-10-13 17:41:27.731313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:33.877 [2024-10-13 17:41:27.731319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41288 len:8 PRP1 0x0 PRP2 0x0 00:31:33.877 [2024-10-13 17:41:27.731326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.731334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:33.877 [2024-10-13 17:41:27.731339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:33.877 [2024-10-13 17:41:27.731346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41296 len:8 PRP1 0x0 PRP2 0x0 00:31:33.877 [2024-10-13 17:41:27.731353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.731361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:33.877 [2024-10-13 17:41:27.731366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:33.877 [2024-10-13 17:41:27.731372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41304 len:8 PRP1 0x0 PRP2 0x0 00:31:33.877 [2024-10-13 17:41:27.731379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.731387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:33.877 [2024-10-13 17:41:27.731393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:33.877 
[2024-10-13 17:41:27.731398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41312 len:8 PRP1 0x0 PRP2 0x0 00:31:33.877 [2024-10-13 17:41:27.731406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.731413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:33.877 [2024-10-13 17:41:27.731419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:33.877 [2024-10-13 17:41:27.731425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41320 len:8 PRP1 0x0 PRP2 0x0 00:31:33.877 [2024-10-13 17:41:27.731432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.731468] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e839b0 was disconnected and freed. reset controller. 
00:31:33.877 [2024-10-13 17:41:27.731482] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:33.877 [2024-10-13 17:41:27.731502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.877 [2024-10-13 17:41:27.731510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.731518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.877 [2024-10-13 17:41:27.731525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.731533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.877 [2024-10-13 17:41:27.731540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.743342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.877 [2024-10-13 17:41:27.743376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:27.743385] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:33.877 [2024-10-13 17:41:27.743431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e648a0 (9): Bad file descriptor 00:31:33.877 [2024-10-13 17:41:27.745605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:33.877 [2024-10-13 17:41:27.779833] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:33.877 [2024-10-13 17:41:31.331532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100760 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 
17:41:31.331736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.877 [2024-10-13 17:41:31.331863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.877 [2024-10-13 17:41:31.331895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.877 [2024-10-13 17:41:31.331938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.877 [2024-10-13 17:41:31.331946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.878 [2024-10-13 17:41:31.331956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.878 [2024-10-13 17:41:31.331964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.878 [2024-10-13 17:41:31.331973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.878 [2024-10-13 17:41:31.331980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.878 [2024-10-13 17:41:31.331989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.878 [2024-10-13 17:41:31.331996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.878 [2024-10-13 17:41:31.332006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.878 [2024-10-13 17:41:31.332013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:33.878 [2024-10-13 17:41:31.332022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.878 [2024-10-13 17:41:31.332029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.878 [2024-10-13 17:41:31.332038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.878 [2024-10-13 17:41:31.332045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.878 [2024-10-13 17:41:31.332054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.878 [2024-10-13 17:41:31.332066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.878 [2024-10-13 17:41:31.332075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.878 [2024-10-13 17:41:31.332083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.878 [2024-10-13 17:41:31.332092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.878 [2024-10-13 17:41:31.332099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.878 [2024-10-13 17:41:31.332108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.878 [2024-10-13 17:41:31.332117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.878 [2024-10-13 17:41:31.332127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.878 [2024-10-13 17:41:31.332134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further queued READ/WRITE commands on qid:1 (lba range ~100384-101496) each printed and completed with ABORTED - SQ DELETION (00/08); identical command/completion pairs condensed ...]
00:31:33.880 [2024-10-13 17:41:31.333697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:33.880 [2024-10-13 17:41:31.333704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:33.880 [2024-10-13 17:41:31.333711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100864 len:8 PRP1 0x0 PRP2 0x0
00:31:33.880 [2024-10-13 17:41:31.333719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:33.880 [2024-10-13 17:41:31.333754] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e70d50 was disconnected and freed. reset controller.
00:31:33.880 [2024-10-13 17:41:31.333763] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four queued ASYNC EVENT REQUEST (0c) admin commands on qid:0 (cid:3..cid:0) each printed and completed with ABORTED - SQ DELETION (00/08); identical pairs condensed ...]
00:31:33.880 [2024-10-13 17:41:31.333844] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
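Every completion in the run above carries the status pair "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion" — exactly what is expected when the submission queue is torn down during failover. A minimal sketch of decoding that printed "SCT/SC" pair (the helper and its tables are illustrative, not SPDK APIs):

```python
# Hypothetical helper: decode the "(SCT/SC)" hex pair that
# spdk_nvme_print_completion logs, e.g. "(00/08)" above.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x04: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",  # SQ deleted while command was queued
}
SCT_NAMES = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC", 0x2: "MEDIA AND DATA INTEGRITY"}

def decode_status(field: str) -> str:
    """Turn an 'SCT/SC' hex string (as printed in the log) into a readable name."""
    sct_s, sc_s = field.split("/")
    sct, sc = int(sct_s, 16), int(sc_s, 16)
    # Only generic (SCT 0x0) status codes are tabulated in this sketch.
    name = GENERIC_STATUS.get(sc, f"SC 0x{sc:02x}") if sct == 0 else f"SC 0x{sc:02x}"
    return f"{SCT_NAMES.get(sct, hex(sct))}: {name}"

print(decode_status("00/08"))  # GENERIC: ABORTED - SQ DELETION
```

With the status decoded, the flood of notices reads as routine cleanup: the driver is manually failing all I/O still queued on the deleted submission queue before reconnecting to the next transport ID.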
00:31:33.880 [2024-10-13 17:41:31.333866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e648a0 (9): Bad file descriptor
00:31:33.880 [2024-10-13 17:41:31.336358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:33.880 [2024-10-13 17:41:31.370747] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:33.880 [2024-10-13 17:41:35.692802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.880 [2024-10-13 17:41:35.692846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further queued READ commands on qid:1 (lba range ~7272-7976) each printed and completed with ABORTED - SQ DELETION (00/08); identical command/completion pairs condensed, log continues ...]
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 
17:41:35.693221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:16 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.881 [2024-10-13 17:41:35.693336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.881 [2024-10-13 17:41:35.693384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.881 [2024-10-13 17:41:35.693401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:33.881 [2024-10-13 17:41:35.693410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.881 [2024-10-13 17:41:35.693449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.881 [2024-10-13 17:41:35.693466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.881 [2024-10-13 17:41:35.693521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.881 [2024-10-13 17:41:35.693666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.881 [2024-10-13 17:41:35.693675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.881 [2024-10-13 17:41:35.693682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 
17:41:35.693691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.693698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.693782] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.693798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.693847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:33.882 [2024-10-13 17:41:35.693879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.693895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.693930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.693946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.693988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.693995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 
[2024-10-13 17:41:35.694163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.694179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.694211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.694227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.694243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694252] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.882 [2024-10-13 17:41:35.694274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.882 [2024-10-13 17:41:35.694290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.882 [2024-10-13 17:41:35.694299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8472 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 
17:41:35.694627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694714] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.883 [2024-10-13 17:41:35.694908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.883 [2024-10-13 17:41:35.694940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.694965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:33.883 [2024-10-13 17:41:35.694975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:33.883 [2024-10-13 17:41:35.694982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8008 len:8 PRP1 0x0 PRP2 0x0 00:31:33.883 [2024-10-13 17:41:35.694990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.883 [2024-10-13 17:41:35.695028] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e878e0 was disconnected and freed. reset controller. 
00:31:33.883 [2024-10-13 17:41:35.695037] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:33.883 [2024-10-13 17:41:35.695056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.884 [2024-10-13 17:41:35.695069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.884 [2024-10-13 17:41:35.695078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.884 [2024-10-13 17:41:35.695085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.884 [2024-10-13 17:41:35.695093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.884 [2024-10-13 17:41:35.695101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.884 [2024-10-13 17:41:35.695109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.884 [2024-10-13 17:41:35.695116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.884 [2024-10-13 17:41:35.695124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:33.884 [2024-10-13 17:41:35.697580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:33.884 [2024-10-13 17:41:35.697602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e648a0 (9): Bad file descriptor 00:31:33.884 [2024-10-13 17:41:35.726299] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:33.884 00:31:33.884 Latency(us) 00:31:33.884 [2024-10-13T15:41:42.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:33.884 [2024-10-13T15:41:42.408Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:33.884 Verification LBA range: start 0x0 length 0x4000 00:31:33.884 NVMe0n1 : 15.01 19851.87 77.55 335.68 0.00 6324.14 518.83 18350.08 00:31:33.884 [2024-10-13T15:41:42.408Z] =================================================================================================================== 00:31:33.884 [2024-10-13T15:41:42.408Z] Total : 19851.87 77.55 335.68 0.00 6324.14 518.83 18350.08 00:31:33.884 Received shutdown signal, test time was about 15.000000 seconds 00:31:33.884 00:31:33.884 Latency(us) 00:31:33.884 [2024-10-13T15:41:42.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:33.884 [2024-10-13T15:41:42.408Z] =================================================================================================================== 00:31:33.884 [2024-10-13T15:41:42.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:33.884 17:41:41 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:33.884 17:41:41 -- host/failover.sh@65 -- # count=3 00:31:33.884 17:41:41 -- host/failover.sh@67 -- # (( count != 3 )) 00:31:33.884 17:41:41 -- host/failover.sh@73 -- # bdevperf_pid=3382480 00:31:33.884 17:41:41 -- host/failover.sh@75 -- # waitforlisten 3382480 /var/tmp/bdevperf.sock 00:31:33.884 17:41:41 -- host/failover.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:33.884 17:41:41 -- common/autotest_common.sh@819 -- # '[' -z 3382480 ']' 00:31:33.884 17:41:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:33.884 17:41:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:33.884 17:41:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:33.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:33.884 17:41:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:33.884 17:41:41 -- common/autotest_common.sh@10 -- # set +x 00:31:34.456 17:41:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:34.456 17:41:42 -- common/autotest_common.sh@852 -- # return 0 00:31:34.456 17:41:42 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:34.456 [2024-10-13 17:41:42.900316] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:34.456 17:41:42 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:34.717 [2024-10-13 17:41:43.072751] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:34.717 17:41:43 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:34.978 NVMe0n1 00:31:34.978 17:41:43 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:35.240 00:31:35.240 17:41:43 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:35.501 00:31:35.501 17:41:43 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:35.501 17:41:43 -- host/failover.sh@82 -- # grep -q NVMe0 00:31:35.762 17:41:44 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:36.023 17:41:44 -- host/failover.sh@87 -- # sleep 3 00:31:39.323 17:41:47 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:39.323 17:41:47 -- host/failover.sh@88 -- # grep -q NVMe0 00:31:39.323 17:41:47 -- host/failover.sh@90 -- # run_test_pid=3383616 00:31:39.323 17:41:47 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:39.323 17:41:47 -- host/failover.sh@92 -- # wait 3383616 00:31:40.264 0 00:31:40.264 17:41:48 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:40.264 [2024-10-13 17:41:41.975548] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:40.264 [2024-10-13 17:41:41.975623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382480 ] 00:31:40.264 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.264 [2024-10-13 17:41:42.039398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.264 [2024-10-13 17:41:42.067182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.264 [2024-10-13 17:41:44.288466] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:40.264 [2024-10-13 17:41:44.288511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.264 [2024-10-13 17:41:44.288522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.264 [2024-10-13 17:41:44.288531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.264 [2024-10-13 17:41:44.288539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.264 [2024-10-13 17:41:44.288547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.264 [2024-10-13 17:41:44.288554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.264 [2024-10-13 17:41:44.288562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.264 [2024-10-13 17:41:44.288569] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.264 [2024-10-13 17:41:44.288576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:40.264 [2024-10-13 17:41:44.288598] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:40.264 [2024-10-13 17:41:44.288612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17638a0 (9): Bad file descriptor 00:31:40.264 [2024-10-13 17:41:44.336498] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:40.264 Running I/O for 1 seconds... 00:31:40.264 00:31:40.264 Latency(us) 00:31:40.264 [2024-10-13T15:41:48.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.264 [2024-10-13T15:41:48.788Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:40.264 Verification LBA range: start 0x0 length 0x4000 00:31:40.264 NVMe0n1 : 1.00 20016.21 78.19 0.00 0.00 6366.31 969.39 13380.27 00:31:40.264 [2024-10-13T15:41:48.788Z] =================================================================================================================== 00:31:40.264 [2024-10-13T15:41:48.788Z] Total : 20016.21 78.19 0.00 0.00 6366.31 969.39 13380.27 00:31:40.264 17:41:48 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:40.264 17:41:48 -- host/failover.sh@95 -- # grep -q NVMe0 00:31:40.525 17:41:48 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:40.525 17:41:48 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:31:40.525 17:41:48 -- host/failover.sh@99 -- # grep -q NVMe0 00:31:40.785 17:41:49 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:41.045 17:41:49 -- host/failover.sh@101 -- # sleep 3 00:31:44.406 17:41:52 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:44.406 17:41:52 -- host/failover.sh@103 -- # grep -q NVMe0 00:31:44.406 17:41:52 -- host/failover.sh@108 -- # killprocess 3382480 00:31:44.406 17:41:52 -- common/autotest_common.sh@926 -- # '[' -z 3382480 ']' 00:31:44.406 17:41:52 -- common/autotest_common.sh@930 -- # kill -0 3382480 00:31:44.406 17:41:52 -- common/autotest_common.sh@931 -- # uname 00:31:44.406 17:41:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:44.406 17:41:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3382480 00:31:44.407 17:41:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:44.407 17:41:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:44.407 17:41:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3382480' 00:31:44.407 killing process with pid 3382480 00:31:44.407 17:41:52 -- common/autotest_common.sh@945 -- # kill 3382480 00:31:44.407 17:41:52 -- common/autotest_common.sh@950 -- # wait 3382480 00:31:44.407 17:41:52 -- host/failover.sh@110 -- # sync 00:31:44.407 17:41:52 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:44.407 17:41:52 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:44.407 17:41:52 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:44.407 17:41:52 -- host/failover.sh@116 
-- # nvmftestfini 00:31:44.407 17:41:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:44.407 17:41:52 -- nvmf/common.sh@116 -- # sync 00:31:44.407 17:41:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:44.407 17:41:52 -- nvmf/common.sh@119 -- # set +e 00:31:44.407 17:41:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:44.407 17:41:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:44.407 rmmod nvme_tcp 00:31:44.407 rmmod nvme_fabrics 00:31:44.407 rmmod nvme_keyring 00:31:44.679 17:41:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:44.679 17:41:52 -- nvmf/common.sh@123 -- # set -e 00:31:44.679 17:41:52 -- nvmf/common.sh@124 -- # return 0 00:31:44.679 17:41:52 -- nvmf/common.sh@477 -- # '[' -n 3378839 ']' 00:31:44.679 17:41:52 -- nvmf/common.sh@478 -- # killprocess 3378839 00:31:44.679 17:41:52 -- common/autotest_common.sh@926 -- # '[' -z 3378839 ']' 00:31:44.679 17:41:52 -- common/autotest_common.sh@930 -- # kill -0 3378839 00:31:44.679 17:41:52 -- common/autotest_common.sh@931 -- # uname 00:31:44.679 17:41:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:44.679 17:41:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3378839 00:31:44.679 17:41:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:44.679 17:41:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:44.679 17:41:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3378839' 00:31:44.679 killing process with pid 3378839 00:31:44.679 17:41:53 -- common/autotest_common.sh@945 -- # kill 3378839 00:31:44.679 17:41:53 -- common/autotest_common.sh@950 -- # wait 3378839 00:31:44.679 17:41:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:44.679 17:41:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:44.679 17:41:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:44.680 17:41:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.680 17:41:53 -- 
nvmf/common.sh@277 -- # remove_spdk_ns 00:31:44.680 17:41:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.680 17:41:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:44.680 17:41:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.277 17:41:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:47.277 00:31:47.277 real 0m39.740s 00:31:47.277 user 2m2.166s 00:31:47.277 sys 0m8.367s 00:31:47.277 17:41:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:47.277 17:41:55 -- common/autotest_common.sh@10 -- # set +x 00:31:47.277 ************************************ 00:31:47.277 END TEST nvmf_failover 00:31:47.277 ************************************ 00:31:47.277 17:41:55 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:47.277 17:41:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:47.278 17:41:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:47.278 17:41:55 -- common/autotest_common.sh@10 -- # set +x 00:31:47.278 ************************************ 00:31:47.278 START TEST nvmf_discovery 00:31:47.278 ************************************ 00:31:47.278 17:41:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:47.278 * Looking for test storage... 
00:31:47.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:47.278 17:41:55 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.278 17:41:55 -- nvmf/common.sh@7 -- # uname -s 00:31:47.278 17:41:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.278 17:41:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.278 17:41:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.278 17:41:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.278 17:41:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.278 17:41:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.278 17:41:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.278 17:41:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.278 17:41:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.278 17:41:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.278 17:41:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:47.278 17:41:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:47.278 17:41:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.278 17:41:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.278 17:41:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.278 17:41:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.278 17:41:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.278 17:41:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.278 17:41:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.278 17:41:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.278 17:41:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.278 17:41:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.278 17:41:55 -- paths/export.sh@5 -- # export PATH 00:31:47.278 17:41:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.278 17:41:55 -- nvmf/common.sh@46 -- # : 0 00:31:47.278 17:41:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:47.278 17:41:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:47.278 17:41:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:47.278 17:41:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.278 17:41:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.278 17:41:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:47.278 17:41:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:47.278 17:41:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:47.278 17:41:55 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:47.278 17:41:55 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:47.278 17:41:55 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:47.278 17:41:55 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:47.278 17:41:55 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:47.278 17:41:55 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:47.278 17:41:55 -- host/discovery.sh@25 -- # nvmftestinit 00:31:47.278 17:41:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:47.278 17:41:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.278 17:41:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:47.278 17:41:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:47.278 
17:41:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:47.278 17:41:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.278 17:41:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:47.278 17:41:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.278 17:41:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:47.278 17:41:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:47.278 17:41:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:47.278 17:41:55 -- common/autotest_common.sh@10 -- # set +x 00:31:55.422 17:42:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:55.422 17:42:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:55.422 17:42:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:55.422 17:42:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:55.422 17:42:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:55.422 17:42:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:55.422 17:42:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:55.422 17:42:02 -- nvmf/common.sh@294 -- # net_devs=() 00:31:55.422 17:42:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:55.422 17:42:02 -- nvmf/common.sh@295 -- # e810=() 00:31:55.422 17:42:02 -- nvmf/common.sh@295 -- # local -ga e810 00:31:55.422 17:42:02 -- nvmf/common.sh@296 -- # x722=() 00:31:55.422 17:42:02 -- nvmf/common.sh@296 -- # local -ga x722 00:31:55.422 17:42:02 -- nvmf/common.sh@297 -- # mlx=() 00:31:55.422 17:42:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:55.422 17:42:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.422 17:42:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:55.423 17:42:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:55.423 17:42:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:55.423 17:42:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:55.423 17:42:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:55.423 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:55.423 17:42:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:55.423 17:42:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:55.423 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:55.423 17:42:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:55.423 17:42:02 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:55.423 17:42:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:55.423 17:42:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.423 17:42:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:55.423 17:42:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.423 17:42:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:55.423 Found net devices under 0000:31:00.0: cvl_0_0 00:31:55.423 17:42:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.423 17:42:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:55.423 17:42:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.423 17:42:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:55.423 17:42:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.423 17:42:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:55.423 Found net devices under 0000:31:00.1: cvl_0_1 00:31:55.423 17:42:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.423 17:42:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:55.423 17:42:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:55.423 17:42:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:55.423 17:42:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.423 17:42:02 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.423 17:42:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.423 17:42:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:55.423 17:42:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.423 17:42:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.423 17:42:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:55.423 17:42:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.423 17:42:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.423 17:42:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:55.423 17:42:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:55.423 17:42:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.423 17:42:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.423 17:42:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.423 17:42:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.423 17:42:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:55.423 17:42:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.423 17:42:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.423 17:42:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.423 17:42:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:55.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:55.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:31:55.423 00:31:55.423 --- 10.0.0.2 ping statistics --- 00:31:55.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.423 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:31:55.423 17:42:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:31:55.423 00:31:55.423 --- 10.0.0.1 ping statistics --- 00:31:55.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.423 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:31:55.423 17:42:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.423 17:42:02 -- nvmf/common.sh@410 -- # return 0 00:31:55.423 17:42:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:55.423 17:42:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.423 17:42:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:55.423 17:42:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.423 17:42:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:55.423 17:42:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:55.423 17:42:02 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:55.423 17:42:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:55.423 17:42:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:55.423 17:42:02 -- common/autotest_common.sh@10 -- # set +x 00:31:55.423 17:42:02 -- nvmf/common.sh@469 -- # nvmfpid=3388842 00:31:55.423 17:42:02 -- nvmf/common.sh@470 -- # waitforlisten 3388842 00:31:55.423 17:42:02 -- common/autotest_common.sh@819 -- # '[' -z 3388842 ']' 00:31:55.423 17:42:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.423 17:42:02 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:31:55.423 17:42:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.423 17:42:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:55.423 17:42:02 -- common/autotest_common.sh@10 -- # set +x 00:31:55.423 17:42:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:55.423 [2024-10-13 17:42:02.849988] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:55.423 [2024-10-13 17:42:02.850075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.423 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.423 [2024-10-13 17:42:02.943964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.423 [2024-10-13 17:42:02.987951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:55.423 [2024-10-13 17:42:02.988102] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.423 [2024-10-13 17:42:02.988112] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.423 [2024-10-13 17:42:02.988120] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
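The target bring-up recorded above runs `nvmf_tgt` inside a dedicated network namespace so the initiator and target sides of the TCP connection use separate interfaces on the same machine. A minimal dry-run sketch of that plumbing, with the interface and namespace names taken from the log (`run` only prints the commands, since the real ones need root):

```shell
#!/bin/sh
# Dry-run sketch of the netns plumbing recorded in the log above.
# `run` echoes instead of executing, since the real commands require root.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk          # namespace hosting the target
TGT_IF=cvl_0_0              # target-side interface (moved into the netns)
INI_IF=cvl_0_1              # initiator-side interface (stays in the root ns)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Finally the target app itself is launched inside the namespace:
run ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
```

The pings to 10.0.0.2 (from the root namespace) and 10.0.0.1 (from inside the namespace) in the log verify this topology before the target is started.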
00:31:55.423 [2024-10-13 17:42:02.988142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.423 17:42:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:55.423 17:42:03 -- common/autotest_common.sh@852 -- # return 0 00:31:55.423 17:42:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:55.423 17:42:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:55.423 17:42:03 -- common/autotest_common.sh@10 -- # set +x 00:31:55.423 17:42:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.423 17:42:03 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.423 17:42:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.423 17:42:03 -- common/autotest_common.sh@10 -- # set +x 00:31:55.423 [2024-10-13 17:42:03.676907] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.423 17:42:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.423 17:42:03 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:55.423 17:42:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.423 17:42:03 -- common/autotest_common.sh@10 -- # set +x 00:31:55.423 [2024-10-13 17:42:03.685167] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:55.423 17:42:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.423 17:42:03 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:55.423 17:42:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.423 17:42:03 -- common/autotest_common.sh@10 -- # set +x 00:31:55.423 null0 00:31:55.423 17:42:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.423 17:42:03 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:55.423 17:42:03 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:31:55.423 17:42:03 -- common/autotest_common.sh@10 -- # set +x 00:31:55.423 null1 00:31:55.423 17:42:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.423 17:42:03 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:55.423 17:42:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.423 17:42:03 -- common/autotest_common.sh@10 -- # set +x 00:31:55.423 17:42:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.423 17:42:03 -- host/discovery.sh@45 -- # hostpid=3389113 00:31:55.423 17:42:03 -- host/discovery.sh@46 -- # waitforlisten 3389113 /tmp/host.sock 00:31:55.423 17:42:03 -- common/autotest_common.sh@819 -- # '[' -z 3389113 ']' 00:31:55.423 17:42:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:55.423 17:42:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:55.423 17:42:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:55.423 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:55.423 17:42:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:55.423 17:42:03 -- common/autotest_common.sh@10 -- # set +x 00:31:55.423 17:42:03 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:55.423 [2024-10-13 17:42:03.764357] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:55.423 [2024-10-13 17:42:03.764417] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389113 ] 00:31:55.423 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.423 [2024-10-13 17:42:03.830615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.424 [2024-10-13 17:42:03.867578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:55.424 [2024-10-13 17:42:03.867742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.366 17:42:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:56.366 17:42:04 -- common/autotest_common.sh@852 -- # return 0 00:31:56.366 17:42:04 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:56.366 17:42:04 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:56.366 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.366 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.366 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.366 17:42:04 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:56.366 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.366 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.366 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.366 17:42:04 -- host/discovery.sh@72 -- # notify_id=0 00:31:56.366 17:42:04 -- host/discovery.sh@78 -- # get_subsystem_names 00:31:56.366 17:42:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:56.366 17:42:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:56.366 
17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.366 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.366 17:42:04 -- host/discovery.sh@59 -- # sort 00:31:56.366 17:42:04 -- host/discovery.sh@59 -- # xargs 00:31:56.366 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.366 17:42:04 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:31:56.366 17:42:04 -- host/discovery.sh@79 -- # get_bdev_list 00:31:56.366 17:42:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.366 17:42:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:56.366 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.366 17:42:04 -- host/discovery.sh@55 -- # sort 00:31:56.366 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.366 17:42:04 -- host/discovery.sh@55 -- # xargs 00:31:56.366 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.366 17:42:04 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:31:56.366 17:42:04 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:56.366 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.366 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.366 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.366 17:42:04 -- host/discovery.sh@82 -- # get_subsystem_names 00:31:56.366 17:42:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:56.366 17:42:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:56.366 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.366 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.366 17:42:04 -- host/discovery.sh@59 -- # sort 00:31:56.366 17:42:04 -- host/discovery.sh@59 -- # xargs 00:31:56.366 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.366 17:42:04 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:31:56.366 17:42:04 -- 
host/discovery.sh@83 -- # get_bdev_list 00:31:56.366 17:42:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.367 17:42:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:56.367 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.367 17:42:04 -- host/discovery.sh@55 -- # sort 00:31:56.367 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.367 17:42:04 -- host/discovery.sh@55 -- # xargs 00:31:56.367 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.367 17:42:04 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:56.367 17:42:04 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:56.367 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.367 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.367 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.367 17:42:04 -- host/discovery.sh@86 -- # get_subsystem_names 00:31:56.367 17:42:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:56.367 17:42:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:56.367 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.367 17:42:04 -- host/discovery.sh@59 -- # sort 00:31:56.367 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.367 17:42:04 -- host/discovery.sh@59 -- # xargs 00:31:56.367 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.367 17:42:04 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:31:56.367 17:42:04 -- host/discovery.sh@87 -- # get_bdev_list 00:31:56.367 17:42:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.367 17:42:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:56.367 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.367 17:42:04 -- host/discovery.sh@55 -- # sort 00:31:56.367 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.367 17:42:04 
-- host/discovery.sh@55 -- # xargs 00:31:56.367 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.367 17:42:04 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:56.367 17:42:04 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:56.367 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.367 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.367 [2024-10-13 17:42:04.876181] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.367 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.367 17:42:04 -- host/discovery.sh@92 -- # get_subsystem_names 00:31:56.367 17:42:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:56.367 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.367 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.367 17:42:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:56.367 17:42:04 -- host/discovery.sh@59 -- # sort 00:31:56.367 17:42:04 -- host/discovery.sh@59 -- # xargs 00:31:56.628 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.628 17:42:04 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:56.628 17:42:04 -- host/discovery.sh@93 -- # get_bdev_list 00:31:56.628 17:42:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.628 17:42:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:56.628 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.628 17:42:04 -- host/discovery.sh@55 -- # sort 00:31:56.628 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.628 17:42:04 -- host/discovery.sh@55 -- # xargs 00:31:56.628 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.628 17:42:04 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:31:56.628 17:42:04 -- host/discovery.sh@94 -- # get_notification_count 
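Each `get_*` helper above pipes the RPC JSON through `jq -r`, then `sort` and `xargs`, so that a list-valued result collapses to one deterministically ordered, space-joined line before the `[[ … == … ]]` comparison. The normalization step in isolation, using the two listener ports seen later in this run as example input:

```shell
# The harness compares list-valued RPC output as one sorted, space-joined line:
# `sort -n` fixes the order, `xargs` joins the lines with single spaces.
ports=$(printf '4421\n4420\n' | sort -n | xargs)
echo "$ports"    # -> 4420 4421
[ "$ports" = "4420 4421" ] && echo "both paths present"
```

This is why the later checks can compare against literal strings like `4420 4421` regardless of the order the controllers report their paths.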
00:31:56.628 17:42:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:56.628 17:42:04 -- host/discovery.sh@74 -- # jq '. | length' 00:31:56.628 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.628 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:56.628 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.628 17:42:05 -- host/discovery.sh@74 -- # notification_count=0 00:31:56.628 17:42:05 -- host/discovery.sh@75 -- # notify_id=0 00:31:56.628 17:42:05 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:31:56.628 17:42:05 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:56.628 17:42:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.628 17:42:05 -- common/autotest_common.sh@10 -- # set +x 00:31:56.628 17:42:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.628 17:42:05 -- host/discovery.sh@100 -- # sleep 1 00:31:57.206 [2024-10-13 17:42:05.605263] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:57.206 [2024-10-13 17:42:05.605287] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:57.206 [2024-10-13 17:42:05.605300] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:57.206 [2024-10-13 17:42:05.691576] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:57.467 [2024-10-13 17:42:05.788891] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:57.467 [2024-10-13 17:42:05.788912] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:57.728 17:42:06 -- host/discovery.sh@101 -- # get_subsystem_names 
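The attach sequence above (discovery ctrlr attached → log page → `nvme0` attached) is driven by the RPCs issued earlier in the log. A dry-run sketch of the full exchange, with arguments copied from the log; `rpc()` is a hypothetical stand-in for SPDK's RPC client and only prints here:

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence behind the attach events above.
# rpc() is a stand-in for an SPDK RPC client invocation; it only prints.
rpc() { echo "rpc.py $*"; }

# Target side (default socket), as recorded in the log:
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

# Host side (the second nvmf_tgt, listening on /tmp/host.sock):
rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
```

Once the host is allowed via `nvmf_subsystem_add_host`, the discovery poller sees `cnode0` in the discovery log page and attaches it as `nvme0`, which is what the `[[ nvme0 == \n\v\m\e\0 ]]` check above verifies.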
00:31:57.728 17:42:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:57.728 17:42:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:57.728 17:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.728 17:42:06 -- host/discovery.sh@59 -- # sort 00:31:57.728 17:42:06 -- common/autotest_common.sh@10 -- # set +x 00:31:57.728 17:42:06 -- host/discovery.sh@59 -- # xargs 00:31:57.728 17:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.728 17:42:06 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.728 17:42:06 -- host/discovery.sh@102 -- # get_bdev_list 00:31:57.728 17:42:06 -- host/discovery.sh@55 -- # xargs 00:31:57.728 17:42:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.728 17:42:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:57.728 17:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.728 17:42:06 -- host/discovery.sh@55 -- # sort 00:31:57.728 17:42:06 -- common/autotest_common.sh@10 -- # set +x 00:31:57.728 17:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.728 17:42:06 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:57.728 17:42:06 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:31:57.728 17:42:06 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:57.728 17:42:06 -- host/discovery.sh@63 -- # xargs 00:31:57.728 17:42:06 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:57.728 17:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.728 17:42:06 -- host/discovery.sh@63 -- # sort -n 00:31:57.728 17:42:06 -- common/autotest_common.sh@10 -- # set +x 00:31:57.728 17:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.728 17:42:06 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:31:57.728 17:42:06 -- host/discovery.sh@104 -- # get_notification_count 00:31:57.728 17:42:06 -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:57.728 17:42:06 -- host/discovery.sh@74 -- # jq '. | length' 00:31:57.728 17:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.728 17:42:06 -- common/autotest_common.sh@10 -- # set +x 00:31:57.728 17:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.728 17:42:06 -- host/discovery.sh@74 -- # notification_count=1 00:31:57.728 17:42:06 -- host/discovery.sh@75 -- # notify_id=1 00:31:57.728 17:42:06 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:31:57.728 17:42:06 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:57.728 17:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.728 17:42:06 -- common/autotest_common.sh@10 -- # set +x 00:31:57.728 17:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.728 17:42:06 -- host/discovery.sh@109 -- # sleep 1 00:31:59.112 17:42:07 -- host/discovery.sh@110 -- # get_bdev_list 00:31:59.112 17:42:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.112 17:42:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:59.112 17:42:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.112 17:42:07 -- host/discovery.sh@55 -- # sort 00:31:59.112 17:42:07 -- common/autotest_common.sh@10 -- # set +x 00:31:59.112 17:42:07 -- host/discovery.sh@55 -- # xargs 00:31:59.112 17:42:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.112 17:42:07 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:59.112 17:42:07 -- host/discovery.sh@111 -- # get_notification_count 00:31:59.112 17:42:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:59.112 17:42:07 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:59.112 17:42:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.112 17:42:07 -- common/autotest_common.sh@10 -- # set +x 00:31:59.112 17:42:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.112 17:42:07 -- host/discovery.sh@74 -- # notification_count=1 00:31:59.112 17:42:07 -- host/discovery.sh@75 -- # notify_id=2 00:31:59.112 17:42:07 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:31:59.112 17:42:07 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:59.112 17:42:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.112 17:42:07 -- common/autotest_common.sh@10 -- # set +x 00:31:59.112 [2024-10-13 17:42:07.342996] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:59.112 [2024-10-13 17:42:07.343341] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:59.112 [2024-10-13 17:42:07.343369] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:59.112 17:42:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.112 17:42:07 -- host/discovery.sh@117 -- # sleep 1 00:31:59.112 [2024-10-13 17:42:07.430599] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:59.372 [2024-10-13 17:42:07.694897] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:59.372 [2024-10-13 17:42:07.694915] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:59.373 [2024-10-13 17:42:07.694921] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:59.955 17:42:08 -- host/discovery.sh@118 -- # get_subsystem_names 
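The `notification_count`/`notify_id` pair tracked above follows a cursor pattern: each `get_notification_count` call asks only for events newer than the last seen id, then advances the cursor by the number returned. A sketch of that bookkeeping (the `$1` argument is a stand-in for the `notify_get_notifications … | jq '. | length'` result, which this sketch does not actually issue):

```shell
# Sketch of the notify_id cursor bookkeeping seen in the log:
# each call counts events newer than the last seen id, then advances it.
notify_id=0
get_notification_count() {
    # stand-in for:
    #   rpc.py -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length'
    notification_count=$1
    notify_id=$((notify_id + notification_count))
}
get_notification_count 1   # first namespace attach: cursor 0 -> 1
get_notification_count 1   # second namespace attach: cursor 1 -> 2
echo "count=$notification_count notify_id=$notify_id"   # -> count=1 notify_id=2
```

That is why the log shows `notification_count=1` twice but `notify_id` ending at 2: each attach produced exactly one new event past the previous cursor.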
00:31:59.955 17:42:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:59.955 17:42:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:59.955 17:42:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.955 17:42:08 -- host/discovery.sh@59 -- # sort 00:31:59.955 17:42:08 -- common/autotest_common.sh@10 -- # set +x 00:31:59.955 17:42:08 -- host/discovery.sh@59 -- # xargs 00:31:59.955 17:42:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.955 17:42:08 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.955 17:42:08 -- host/discovery.sh@119 -- # get_bdev_list 00:31:59.955 17:42:08 -- host/discovery.sh@55 -- # sort 00:31:59.955 17:42:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.955 17:42:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:59.955 17:42:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.955 17:42:08 -- common/autotest_common.sh@10 -- # set +x 00:31:59.955 17:42:08 -- host/discovery.sh@55 -- # xargs 00:31:59.955 17:42:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.955 17:42:08 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:59.955 17:42:08 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:31:59.955 17:42:08 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:59.955 17:42:08 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:59.955 17:42:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.955 17:42:08 -- host/discovery.sh@63 -- # sort -n 00:31:59.955 17:42:08 -- common/autotest_common.sh@10 -- # set +x 00:31:59.955 17:42:08 -- host/discovery.sh@63 -- # xargs 00:31:59.955 17:42:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.217 17:42:08 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:00.217 17:42:08 -- host/discovery.sh@121 -- # 
get_notification_count 00:32:00.217 17:42:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:00.217 17:42:08 -- host/discovery.sh@74 -- # jq '. | length' 00:32:00.217 17:42:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.217 17:42:08 -- common/autotest_common.sh@10 -- # set +x 00:32:00.217 17:42:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.217 17:42:08 -- host/discovery.sh@74 -- # notification_count=0 00:32:00.217 17:42:08 -- host/discovery.sh@75 -- # notify_id=2 00:32:00.217 17:42:08 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:32:00.217 17:42:08 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:00.217 17:42:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.217 17:42:08 -- common/autotest_common.sh@10 -- # set +x 00:32:00.217 [2024-10-13 17:42:08.562340] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:00.217 [2024-10-13 17:42:08.562361] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:00.217 17:42:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.217 17:42:08 -- host/discovery.sh@127 -- # sleep 1 00:32:00.217 [2024-10-13 17:42:08.568444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.217 [2024-10-13 17:42:08.568462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.217 [2024-10-13 17:42:08.568472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.217 [2024-10-13 17:42:08.568479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:00.217 [2024-10-13 17:42:08.568488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.217 [2024-10-13 17:42:08.568495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.217 [2024-10-13 17:42:08.568503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.217 [2024-10-13 17:42:08.568510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.217 [2024-10-13 17:42:08.568517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.217 [2024-10-13 17:42:08.578459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.217 [2024-10-13 17:42:08.588499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.217 [2024-10-13 17:42:08.588834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.217 [2024-10-13 17:42:08.589274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.217 [2024-10-13 17:42:08.589313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.217 [2024-10-13 17:42:08.589326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.217 [2024-10-13 17:42:08.589347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.217 [2024-10-13 17:42:08.589364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.217 [2024-10-13 17:42:08.589371] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.217 [2024-10-13 17:42:08.589380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.217 [2024-10-13 17:42:08.589397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:00.217 [2024-10-13 17:42:08.598555] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.217 [2024-10-13 17:42:08.598775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.217 [2024-10-13 17:42:08.599082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.217 [2024-10-13 17:42:08.599093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.217 [2024-10-13 17:42:08.599101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.217 [2024-10-13 17:42:08.599113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.217 [2024-10-13 17:42:08.599124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.217 [2024-10-13 17:42:08.599130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.218 [2024-10-13 17:42:08.599138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.218 [2024-10-13 17:42:08.599149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:00.218 [2024-10-13 17:42:08.608609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.218 [2024-10-13 17:42:08.608932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.609278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.609316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.218 [2024-10-13 17:42:08.609328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.218 [2024-10-13 17:42:08.609348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.218 [2024-10-13 17:42:08.609360] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.218 [2024-10-13 17:42:08.609367] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.218 [2024-10-13 17:42:08.609376] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.218 [2024-10-13 17:42:08.609391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:00.218 [2024-10-13 17:42:08.618666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.218 [2024-10-13 17:42:08.619059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.619513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.619552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.218 [2024-10-13 17:42:08.619562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.218 [2024-10-13 17:42:08.619581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.218 [2024-10-13 17:42:08.619607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.218 [2024-10-13 17:42:08.619620] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.218 [2024-10-13 17:42:08.619629] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.218 [2024-10-13 17:42:08.619643] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:00.218 [2024-10-13 17:42:08.628723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.218 [2024-10-13 17:42:08.629081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.629371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.629382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.218 [2024-10-13 17:42:08.629390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.218 [2024-10-13 17:42:08.629402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.218 [2024-10-13 17:42:08.629412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.218 [2024-10-13 17:42:08.629419] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.218 [2024-10-13 17:42:08.629426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.218 [2024-10-13 17:42:08.629437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:00.218 [2024-10-13 17:42:08.638777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.218 [2024-10-13 17:42:08.639068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.639375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.639387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.218 [2024-10-13 17:42:08.639394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.218 [2024-10-13 17:42:08.639405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.218 [2024-10-13 17:42:08.639415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.218 [2024-10-13 17:42:08.639422] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.218 [2024-10-13 17:42:08.639429] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.218 [2024-10-13 17:42:08.639439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:00.218 [2024-10-13 17:42:08.648830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.218 [2024-10-13 17:42:08.649306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.649658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.649672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.218 [2024-10-13 17:42:08.649682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.218 [2024-10-13 17:42:08.649700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.218 [2024-10-13 17:42:08.649749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.218 [2024-10-13 17:42:08.649759] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.218 [2024-10-13 17:42:08.649772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.218 [2024-10-13 17:42:08.649787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:00.218 [2024-10-13 17:42:08.658885] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.218 [2024-10-13 17:42:08.659212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.659553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.659564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.218 [2024-10-13 17:42:08.659572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.218 [2024-10-13 17:42:08.659584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.218 [2024-10-13 17:42:08.659594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.218 [2024-10-13 17:42:08.659600] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.218 [2024-10-13 17:42:08.659607] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.218 [2024-10-13 17:42:08.659618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:00.218 [2024-10-13 17:42:08.668943] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.218 [2024-10-13 17:42:08.669262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.669556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.669566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.218 [2024-10-13 17:42:08.669574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.218 [2024-10-13 17:42:08.669585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.218 [2024-10-13 17:42:08.669595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.218 [2024-10-13 17:42:08.669601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.218 [2024-10-13 17:42:08.669609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.218 [2024-10-13 17:42:08.669619] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:00.218 [2024-10-13 17:42:08.678995] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.218 [2024-10-13 17:42:08.679288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.679589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.679599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.218 [2024-10-13 17:42:08.679606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.218 [2024-10-13 17:42:08.679617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.218 [2024-10-13 17:42:08.679627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.218 [2024-10-13 17:42:08.679634] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.218 [2024-10-13 17:42:08.679640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.218 [2024-10-13 17:42:08.679655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:00.218 [2024-10-13 17:42:08.689046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:00.218 [2024-10-13 17:42:08.689367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.689664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.218 [2024-10-13 17:42:08.689674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5cca0 with addr=10.0.0.2, port=4420 00:32:00.218 [2024-10-13 17:42:08.689681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5cca0 is same with the state(5) to be set 00:32:00.218 [2024-10-13 17:42:08.689691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5cca0 (9): Bad file descriptor 00:32:00.218 [2024-10-13 17:42:08.689702] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.218 [2024-10-13 17:42:08.689708] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:00.218 [2024-10-13 17:42:08.689715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.218 [2024-10-13 17:42:08.689725] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:00.218 [2024-10-13 17:42:08.689854] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:00.218 [2024-10-13 17:42:08.689871] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:01.160 17:42:09 -- host/discovery.sh@128 -- # get_subsystem_names 00:32:01.160 17:42:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:01.160 17:42:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:01.160 17:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:01.160 17:42:09 -- host/discovery.sh@59 -- # sort 00:32:01.160 17:42:09 -- common/autotest_common.sh@10 -- # set +x 00:32:01.160 17:42:09 -- host/discovery.sh@59 -- # xargs 00:32:01.160 17:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:01.160 17:42:09 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.160 17:42:09 -- host/discovery.sh@129 -- # get_bdev_list 00:32:01.160 17:42:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.160 17:42:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:01.160 17:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:01.160 17:42:09 -- host/discovery.sh@55 -- # sort 00:32:01.160 17:42:09 -- common/autotest_common.sh@10 -- # set +x 00:32:01.160 17:42:09 -- host/discovery.sh@55 -- # xargs 00:32:01.160 17:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:01.160 17:42:09 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:01.160 17:42:09 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:32:01.160 17:42:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:01.160 17:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:01.160 17:42:09 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:32:01.160 17:42:09 -- common/autotest_common.sh@10 -- # set +x 00:32:01.160 17:42:09 -- host/discovery.sh@63 -- # sort -n 00:32:01.160 17:42:09 -- host/discovery.sh@63 -- # xargs 00:32:01.420 17:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:01.421 17:42:09 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:32:01.421 17:42:09 -- host/discovery.sh@131 -- # get_notification_count 00:32:01.421 17:42:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:01.421 17:42:09 -- host/discovery.sh@74 -- # jq '. | length' 00:32:01.421 17:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:01.421 17:42:09 -- common/autotest_common.sh@10 -- # set +x 00:32:01.421 17:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:01.421 17:42:09 -- host/discovery.sh@74 -- # notification_count=0 00:32:01.421 17:42:09 -- host/discovery.sh@75 -- # notify_id=2 00:32:01.421 17:42:09 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:32:01.421 17:42:09 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:01.421 17:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:01.421 17:42:09 -- common/autotest_common.sh@10 -- # set +x 00:32:01.421 17:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:01.421 17:42:09 -- host/discovery.sh@135 -- # sleep 1 00:32:02.364 17:42:10 -- host/discovery.sh@136 -- # get_subsystem_names 00:32:02.364 17:42:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:02.364 17:42:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:02.364 17:42:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.364 17:42:10 -- host/discovery.sh@59 -- # sort 00:32:02.364 17:42:10 -- common/autotest_common.sh@10 -- # set +x 00:32:02.364 17:42:10 -- host/discovery.sh@59 -- # xargs 00:32:02.364 17:42:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.364 17:42:10 
-- host/discovery.sh@136 -- # [[ '' == '' ]] 00:32:02.364 17:42:10 -- host/discovery.sh@137 -- # get_bdev_list 00:32:02.364 17:42:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:02.364 17:42:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:02.364 17:42:10 -- host/discovery.sh@55 -- # sort 00:32:02.364 17:42:10 -- host/discovery.sh@55 -- # xargs 00:32:02.364 17:42:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.364 17:42:10 -- common/autotest_common.sh@10 -- # set +x 00:32:02.364 17:42:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.624 17:42:10 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:32:02.624 17:42:10 -- host/discovery.sh@138 -- # get_notification_count 00:32:02.624 17:42:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:02.624 17:42:10 -- host/discovery.sh@74 -- # jq '. | length' 00:32:02.624 17:42:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.624 17:42:10 -- common/autotest_common.sh@10 -- # set +x 00:32:02.624 17:42:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.624 17:42:10 -- host/discovery.sh@74 -- # notification_count=2 00:32:02.624 17:42:10 -- host/discovery.sh@75 -- # notify_id=4 00:32:02.624 17:42:10 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:32:02.624 17:42:10 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:02.624 17:42:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.624 17:42:10 -- common/autotest_common.sh@10 -- # set +x 00:32:03.566 [2024-10-13 17:42:11.961722] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:03.566 [2024-10-13 17:42:11.961742] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:03.566 [2024-10-13 17:42:11.961755] 
bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:03.566 [2024-10-13 17:42:12.050028] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:03.827 [2024-10-13 17:42:12.114748] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:03.827 [2024-10-13 17:42:12.114779] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:03.827 17:42:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.827 17:42:12 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:03.827 17:42:12 -- common/autotest_common.sh@640 -- # local es=0 00:32:03.827 17:42:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:03.827 17:42:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:32:03.827 17:42:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:03.827 17:42:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:32:03.827 17:42:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:03.827 17:42:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:03.827 17:42:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.827 17:42:12 -- common/autotest_common.sh@10 -- # set +x 00:32:03.827 request: 00:32:03.827 { 00:32:03.827 "name": "nvme", 00:32:03.827 "trtype": "tcp", 00:32:03.827 "traddr": "10.0.0.2", 00:32:03.827 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:03.827 "adrfam": 
"ipv4", 00:32:03.827 "trsvcid": "8009", 00:32:03.827 "wait_for_attach": true, 00:32:03.827 "method": "bdev_nvme_start_discovery", 00:32:03.827 "req_id": 1 00:32:03.827 } 00:32:03.827 Got JSON-RPC error response 00:32:03.827 response: 00:32:03.827 { 00:32:03.827 "code": -17, 00:32:03.827 "message": "File exists" 00:32:03.827 } 00:32:03.827 17:42:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:32:03.827 17:42:12 -- common/autotest_common.sh@643 -- # es=1 00:32:03.827 17:42:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:03.827 17:42:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:03.827 17:42:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:03.827 17:42:12 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:32:03.827 17:42:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:03.827 17:42:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:03.827 17:42:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.827 17:42:12 -- host/discovery.sh@67 -- # sort 00:32:03.827 17:42:12 -- common/autotest_common.sh@10 -- # set +x 00:32:03.827 17:42:12 -- host/discovery.sh@67 -- # xargs 00:32:03.827 17:42:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.827 17:42:12 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:32:03.827 17:42:12 -- host/discovery.sh@147 -- # get_bdev_list 00:32:03.827 17:42:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.827 17:42:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:03.827 17:42:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.827 17:42:12 -- host/discovery.sh@55 -- # sort 00:32:03.827 17:42:12 -- common/autotest_common.sh@10 -- # set +x 00:32:03.827 17:42:12 -- host/discovery.sh@55 -- # xargs 00:32:03.827 17:42:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.827 17:42:12 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 
00:32:03.827 17:42:12 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:03.827 17:42:12 -- common/autotest_common.sh@640 -- # local es=0 00:32:03.827 17:42:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:03.827 17:42:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:32:03.827 17:42:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:03.827 17:42:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:32:03.827 17:42:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:03.827 17:42:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:03.827 17:42:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.827 17:42:12 -- common/autotest_common.sh@10 -- # set +x 00:32:03.827 request: 00:32:03.827 { 00:32:03.827 "name": "nvme_second", 00:32:03.827 "trtype": "tcp", 00:32:03.827 "traddr": "10.0.0.2", 00:32:03.827 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:03.827 "adrfam": "ipv4", 00:32:03.827 "trsvcid": "8009", 00:32:03.827 "wait_for_attach": true, 00:32:03.827 "method": "bdev_nvme_start_discovery", 00:32:03.827 "req_id": 1 00:32:03.827 } 00:32:03.827 Got JSON-RPC error response 00:32:03.827 response: 00:32:03.827 { 00:32:03.827 "code": -17, 00:32:03.827 "message": "File exists" 00:32:03.827 } 00:32:03.827 17:42:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:32:03.827 17:42:12 -- common/autotest_common.sh@643 -- # es=1 00:32:03.827 17:42:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:03.827 17:42:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:03.827 17:42:12 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:03.827 17:42:12 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:32:03.827 17:42:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:03.827 17:42:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:03.827 17:42:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.827 17:42:12 -- host/discovery.sh@67 -- # sort 00:32:03.827 17:42:12 -- common/autotest_common.sh@10 -- # set +x 00:32:03.827 17:42:12 -- host/discovery.sh@67 -- # xargs 00:32:03.827 17:42:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.827 17:42:12 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:32:03.827 17:42:12 -- host/discovery.sh@153 -- # get_bdev_list 00:32:03.827 17:42:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.827 17:42:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:03.827 17:42:12 -- host/discovery.sh@55 -- # xargs 00:32:03.827 17:42:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.827 17:42:12 -- host/discovery.sh@55 -- # sort 00:32:03.827 17:42:12 -- common/autotest_common.sh@10 -- # set +x 00:32:03.827 17:42:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.827 17:42:12 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:03.828 17:42:12 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:03.828 17:42:12 -- common/autotest_common.sh@640 -- # local es=0 00:32:03.828 17:42:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:03.828 17:42:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:32:03.828 17:42:12 -- common/autotest_common.sh@632 -- # case "$(type -t 
"$arg")" in 00:32:03.828 17:42:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:32:04.088 17:42:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:04.088 17:42:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:04.088 17:42:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.088 17:42:12 -- common/autotest_common.sh@10 -- # set +x 00:32:05.030 [2024-10-13 17:42:13.362118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.030 [2024-10-13 17:42:13.362338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.030 [2024-10-13 17:42:13.362350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa77b00 with addr=10.0.0.2, port=8010 00:32:05.030 [2024-10-13 17:42:13.362362] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:05.030 [2024-10-13 17:42:13.362370] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:05.030 [2024-10-13 17:42:13.362378] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:05.972 [2024-10-13 17:42:14.364532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-10-13 17:42:14.364726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-10-13 17:42:14.364738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8d710 with addr=10.0.0.2, port=8010 00:32:05.972 [2024-10-13 17:42:14.364749] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:05.972 [2024-10-13 17:42:14.364756] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:05.972 [2024-10-13 17:42:14.364763] bdev_nvme.c:6821:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:32:06.914 [2024-10-13 17:42:15.366550] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:06.914 request: 00:32:06.914 { 00:32:06.914 "name": "nvme_second", 00:32:06.914 "trtype": "tcp", 00:32:06.914 "traddr": "10.0.0.2", 00:32:06.914 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:06.914 "adrfam": "ipv4", 00:32:06.914 "trsvcid": "8010", 00:32:06.914 "attach_timeout_ms": 3000, 00:32:06.914 "method": "bdev_nvme_start_discovery", 00:32:06.914 "req_id": 1 00:32:06.914 } 00:32:06.914 Got JSON-RPC error response 00:32:06.914 response: 00:32:06.914 { 00:32:06.914 "code": -110, 00:32:06.914 "message": "Connection timed out" 00:32:06.914 } 00:32:06.914 17:42:15 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:32:06.914 17:42:15 -- common/autotest_common.sh@643 -- # es=1 00:32:06.914 17:42:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:06.914 17:42:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:06.914 17:42:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:06.914 17:42:15 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:32:06.914 17:42:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:06.914 17:42:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:06.914 17:42:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:06.914 17:42:15 -- host/discovery.sh@67 -- # sort 00:32:06.914 17:42:15 -- common/autotest_common.sh@10 -- # set +x 00:32:06.914 17:42:15 -- host/discovery.sh@67 -- # xargs 00:32:06.914 17:42:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:06.914 17:42:15 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:32:06.914 17:42:15 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:32:06.914 17:42:15 -- host/discovery.sh@162 -- # kill 3389113 00:32:06.914 17:42:15 -- host/discovery.sh@163 -- # nvmftestfini 00:32:06.914 
17:42:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:06.914 17:42:15 -- nvmf/common.sh@116 -- # sync 00:32:06.914 17:42:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:06.914 17:42:15 -- nvmf/common.sh@119 -- # set +e 00:32:06.914 17:42:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:06.914 17:42:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:06.914 rmmod nvme_tcp 00:32:07.175 rmmod nvme_fabrics 00:32:07.175 rmmod nvme_keyring 00:32:07.175 17:42:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:07.175 17:42:15 -- nvmf/common.sh@123 -- # set -e 00:32:07.175 17:42:15 -- nvmf/common.sh@124 -- # return 0 00:32:07.175 17:42:15 -- nvmf/common.sh@477 -- # '[' -n 3388842 ']' 00:32:07.175 17:42:15 -- nvmf/common.sh@478 -- # killprocess 3388842 00:32:07.175 17:42:15 -- common/autotest_common.sh@926 -- # '[' -z 3388842 ']' 00:32:07.175 17:42:15 -- common/autotest_common.sh@930 -- # kill -0 3388842 00:32:07.175 17:42:15 -- common/autotest_common.sh@931 -- # uname 00:32:07.175 17:42:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:07.175 17:42:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3388842 00:32:07.175 17:42:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:07.175 17:42:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:07.175 17:42:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3388842' 00:32:07.175 killing process with pid 3388842 00:32:07.175 17:42:15 -- common/autotest_common.sh@945 -- # kill 3388842 00:32:07.175 17:42:15 -- common/autotest_common.sh@950 -- # wait 3388842 00:32:07.175 17:42:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:07.175 17:42:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:07.175 17:42:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:07.175 17:42:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:07.175 17:42:15 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:32:07.175 17:42:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.175 17:42:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:07.175 17:42:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.721 17:42:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:09.721 00:32:09.721 real 0m22.505s 00:32:09.721 user 0m28.062s 00:32:09.721 sys 0m7.022s 00:32:09.721 17:42:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:09.721 17:42:17 -- common/autotest_common.sh@10 -- # set +x 00:32:09.721 ************************************ 00:32:09.721 END TEST nvmf_discovery 00:32:09.721 ************************************ 00:32:09.721 17:42:17 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:09.721 17:42:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:09.721 17:42:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:09.721 17:42:17 -- common/autotest_common.sh@10 -- # set +x 00:32:09.721 ************************************ 00:32:09.721 START TEST nvmf_discovery_remove_ifc 00:32:09.721 ************************************ 00:32:09.721 17:42:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:09.721 * Looking for test storage... 
00:32:09.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:09.721 17:42:17 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.721 17:42:17 -- nvmf/common.sh@7 -- # uname -s 00:32:09.721 17:42:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.721 17:42:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.722 17:42:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.722 17:42:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.722 17:42:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.722 17:42:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.722 17:42:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.722 17:42:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.722 17:42:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.722 17:42:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.722 17:42:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:09.722 17:42:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:09.722 17:42:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.722 17:42:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.722 17:42:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.722 17:42:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.722 17:42:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.722 17:42:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.722 17:42:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.722 17:42:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.722 17:42:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.722 17:42:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.722 17:42:17 -- paths/export.sh@5 -- # export PATH 00:32:09.722 17:42:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.722 17:42:17 -- nvmf/common.sh@46 -- # : 0 00:32:09.722 17:42:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:09.722 17:42:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:09.722 17:42:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:09.722 17:42:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.722 17:42:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.722 17:42:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:09.722 17:42:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:09.722 17:42:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:09.722 17:42:17 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:09.722 17:42:17 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:09.722 17:42:17 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:09.722 17:42:17 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:09.722 17:42:17 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:09.722 17:42:17 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:09.722 17:42:17 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:09.722 17:42:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:09.722 17:42:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.722 17:42:17 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:32:09.722 17:42:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:09.722 17:42:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:09.722 17:42:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.722 17:42:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:09.722 17:42:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.722 17:42:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:09.722 17:42:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:09.722 17:42:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:09.722 17:42:17 -- common/autotest_common.sh@10 -- # set +x 00:32:17.859 17:42:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:17.859 17:42:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:17.859 17:42:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:17.859 17:42:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:17.859 17:42:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:17.859 17:42:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:17.859 17:42:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:17.860 17:42:24 -- nvmf/common.sh@294 -- # net_devs=() 00:32:17.860 17:42:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:17.860 17:42:24 -- nvmf/common.sh@295 -- # e810=() 00:32:17.860 17:42:24 -- nvmf/common.sh@295 -- # local -ga e810 00:32:17.860 17:42:24 -- nvmf/common.sh@296 -- # x722=() 00:32:17.860 17:42:24 -- nvmf/common.sh@296 -- # local -ga x722 00:32:17.860 17:42:24 -- nvmf/common.sh@297 -- # mlx=() 00:32:17.860 17:42:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:17.860 17:42:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.860 17:42:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:17.860 17:42:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:17.860 17:42:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:17.860 17:42:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:17.860 17:42:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:17.860 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:17.860 17:42:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:17.860 17:42:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:17.860 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:17.860 17:42:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:17.860 
17:42:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:17.860 17:42:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:17.860 17:42:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.860 17:42:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:17.860 17:42:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.860 17:42:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:17.860 Found net devices under 0000:31:00.0: cvl_0_0 00:32:17.860 17:42:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.860 17:42:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:17.860 17:42:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.860 17:42:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:17.860 17:42:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.860 17:42:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:17.860 Found net devices under 0000:31:00.1: cvl_0_1 00:32:17.860 17:42:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.860 17:42:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:17.860 17:42:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:17.860 17:42:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:17.860 17:42:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:17.860 17:42:24 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:32:17.860 17:42:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.860 17:42:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.860 17:42:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:17.860 17:42:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.860 17:42:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.860 17:42:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:17.860 17:42:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.860 17:42:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.860 17:42:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:17.860 17:42:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:17.860 17:42:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.860 17:42:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.860 17:42:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.860 17:42:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.860 17:42:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:17.860 17:42:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.860 17:42:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.860 17:42:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.860 17:42:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:17.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:17.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:32:17.860 00:32:17.860 --- 10.0.0.2 ping statistics --- 00:32:17.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.860 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:32:17.860 17:42:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:32:17.860 00:32:17.860 --- 10.0.0.1 ping statistics --- 00:32:17.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.860 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:32:17.860 17:42:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.860 17:42:25 -- nvmf/common.sh@410 -- # return 0 00:32:17.860 17:42:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:17.860 17:42:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.860 17:42:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:17.860 17:42:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:17.860 17:42:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.860 17:42:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:17.860 17:42:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:17.860 17:42:25 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:17.860 17:42:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:17.860 17:42:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:17.860 17:42:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.860 17:42:25 -- nvmf/common.sh@469 -- # nvmfpid=3396145 00:32:17.860 17:42:25 -- nvmf/common.sh@470 -- # waitforlisten 3396145 00:32:17.860 17:42:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:17.860 17:42:25 -- 
common/autotest_common.sh@819 -- # '[' -z 3396145 ']' 00:32:17.860 17:42:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.860 17:42:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:17.860 17:42:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.860 17:42:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:17.860 17:42:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.860 [2024-10-13 17:42:25.280538] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:17.860 [2024-10-13 17:42:25.280604] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.860 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.860 [2024-10-13 17:42:25.369430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.860 [2024-10-13 17:42:25.414585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:17.860 [2024-10-13 17:42:25.414727] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.860 [2024-10-13 17:42:25.414737] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.860 [2024-10-13 17:42:25.414744] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:17.860 [2024-10-13 17:42:25.414775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.860 17:42:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:17.860 17:42:26 -- common/autotest_common.sh@852 -- # return 0 00:32:17.860 17:42:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:17.860 17:42:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:17.860 17:42:26 -- common/autotest_common.sh@10 -- # set +x 00:32:17.860 17:42:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.860 17:42:26 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:17.860 17:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.860 17:42:26 -- common/autotest_common.sh@10 -- # set +x 00:32:17.860 [2024-10-13 17:42:26.127321] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.860 [2024-10-13 17:42:26.135556] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:17.860 null0 00:32:17.860 [2024-10-13 17:42:26.167543] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.860 17:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.860 17:42:26 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:17.860 17:42:26 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3396282 00:32:17.860 17:42:26 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3396282 /tmp/host.sock 00:32:17.860 17:42:26 -- common/autotest_common.sh@819 -- # '[' -z 3396282 ']' 00:32:17.860 17:42:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:32:17.860 17:42:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:17.860 17:42:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /tmp/host.sock...' 00:32:17.860 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:17.860 17:42:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:17.860 17:42:26 -- common/autotest_common.sh@10 -- # set +x 00:32:17.861 [2024-10-13 17:42:26.214453] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:17.861 [2024-10-13 17:42:26.214501] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396282 ] 00:32:17.861 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.861 [2024-10-13 17:42:26.273882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.861 [2024-10-13 17:42:26.311387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:17.861 [2024-10-13 17:42:26.311536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.861 17:42:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:17.861 17:42:26 -- common/autotest_common.sh@852 -- # return 0 00:32:17.861 17:42:26 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:17.861 17:42:26 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:17.861 17:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.861 17:42:26 -- common/autotest_common.sh@10 -- # set +x 00:32:17.861 17:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.861 17:42:26 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:17.861 17:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.861 17:42:26 -- common/autotest_common.sh@10 -- # set +x 00:32:18.121 17:42:26 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:18.121 17:42:26 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:18.121 17:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:18.121 17:42:26 -- common/autotest_common.sh@10 -- # set +x 00:32:19.061 [2024-10-13 17:42:27.484239] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:19.061 [2024-10-13 17:42:27.484263] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:19.061 [2024-10-13 17:42:27.484276] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:19.061 [2024-10-13 17:42:27.572561] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:19.322 [2024-10-13 17:42:27.634713] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:19.322 [2024-10-13 17:42:27.634751] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:19.322 [2024-10-13 17:42:27.634774] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:19.322 [2024-10-13 17:42:27.634789] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:19.322 [2024-10-13 17:42:27.634810] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:19.322 17:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:19.322 17:42:27 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:19.322 17:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.322 17:42:27 -- common/autotest_common.sh@10 -- # set +x 00:32:19.322 [2024-10-13 17:42:27.643420] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcf44d0 was disconnected and freed. delete nvme_qpair. 00:32:19.322 17:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:19.322 17:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:19.322 17:42:27 -- common/autotest_common.sh@10 -- # set +x 00:32:19.322 17:42:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:19.322 17:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.582 17:42:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:19.582 17:42:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:20.522 17:42:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:20.522 17:42:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:32:20.522 17:42:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:20.522 17:42:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:20.522 17:42:28 -- common/autotest_common.sh@10 -- # set +x 00:32:20.522 17:42:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:20.522 17:42:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:20.522 17:42:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.522 17:42:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:20.522 17:42:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:21.462 17:42:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:21.462 17:42:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:21.462 17:42:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:21.462 17:42:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.462 17:42:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:21.462 17:42:29 -- common/autotest_common.sh@10 -- # set +x 00:32:21.462 17:42:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:21.462 17:42:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.462 17:42:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:21.462 17:42:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:22.844 17:42:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:22.844 17:42:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.844 17:42:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:22.844 17:42:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:22.844 17:42:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:22.844 17:42:30 -- common/autotest_common.sh@10 -- # set +x 00:32:22.844 17:42:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:22.844 17:42:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:22.844 17:42:31 
-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:22.844 17:42:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:23.785 17:42:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:23.785 17:42:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.785 17:42:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:23.785 17:42:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.785 17:42:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:23.785 17:42:32 -- common/autotest_common.sh@10 -- # set +x 00:32:23.785 17:42:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:23.785 17:42:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.785 17:42:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:23.785 17:42:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:24.726 [2024-10-13 17:42:33.075527] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:24.726 [2024-10-13 17:42:33.075576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.726 [2024-10-13 17:42:33.075588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.726 [2024-10-13 17:42:33.075598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.726 [2024-10-13 17:42:33.075606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.726 [2024-10-13 17:42:33.075615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.726 [2024-10-13 
17:42:33.075622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.726 [2024-10-13 17:42:33.075630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.726 [2024-10-13 17:42:33.075638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.726 [2024-10-13 17:42:33.075651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.726 [2024-10-13 17:42:33.075658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.726 [2024-10-13 17:42:33.075666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcba960 is same with the state(5) to be set 00:32:24.726 17:42:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:24.726 [2024-10-13 17:42:33.085548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcba960 (9): Bad file descriptor 00:32:24.726 17:42:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.726 17:42:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:24.726 17:42:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:24.726 17:42:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:24.726 17:42:33 -- common/autotest_common.sh@10 -- # set +x 00:32:24.726 17:42:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:24.726 [2024-10-13 17:42:33.095593] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:24.726 17:42:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:24.726 17:42:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 
00:32:24.726 17:42:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:25.665 17:42:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.665 17:42:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.665 17:42:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.665 17:42:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.665 17:42:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.665 17:42:34 -- common/autotest_common.sh@10 -- # set +x 00:32:25.665 17:42:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.665 [2024-10-13 17:42:34.153112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:27.047 [2024-10-13 17:42:35.177091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:27.047 [2024-10-13 17:42:35.177135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcba960 with addr=10.0.0.2, port=4420 00:32:27.047 [2024-10-13 17:42:35.177148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcba960 is same with the state(5) to be set 00:32:27.047 [2024-10-13 17:42:35.177171] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:32:27.047 [2024-10-13 17:42:35.177180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:27.047 [2024-10-13 17:42:35.177187] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:27.047 [2024-10-13 17:42:35.177195] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:27.047 [2024-10-13 17:42:35.177545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcba960 (9): Bad file descriptor 00:32:27.047 [2024-10-13 17:42:35.177568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.047 [2024-10-13 17:42:35.177588] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:27.047 [2024-10-13 17:42:35.177610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.047 [2024-10-13 17:42:35.177621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.047 [2024-10-13 17:42:35.177632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.047 [2024-10-13 17:42:35.177641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.047 [2024-10-13 17:42:35.177649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.047 [2024-10-13 17:42:35.177662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.047 [2024-10-13 
17:42:35.177670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.047 [2024-10-13 17:42:35.177678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.047 [2024-10-13 17:42:35.177686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.047 [2024-10-13 17:42:35.177694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.047 [2024-10-13 17:42:35.177701] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:27.047 [2024-10-13 17:42:35.178173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbad70 (9): Bad file descriptor 00:32:27.047 [2024-10-13 17:42:35.179185] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:27.047 [2024-10-13 17:42:35.179195] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:27.047 17:42:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.047 17:42:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:27.047 17:42:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:27.988 17:42:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.988 17:42:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.988 17:42:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.989 17:42:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.989 17:42:36 -- 
common/autotest_common.sh@10 -- # set +x 00:32:27.989 17:42:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.989 17:42:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.989 17:42:36 -- common/autotest_common.sh@10 -- # set +x 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.989 17:42:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:27.989 17:42:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:28.929 [2024-10-13 17:42:37.228996] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:28.929 [2024-10-13 17:42:37.229017] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:28.929 [2024-10-13 17:42:37.229030] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:28.929 [2024-10-13 17:42:37.358447] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:28.929 17:42:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:28.929 17:42:37 -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:32:28.929 17:42:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.929 17:42:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:28.929 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:28.929 17:42:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:28.929 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:32:28.929 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:29.195 17:42:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:29.195 17:42:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:29.195 [2024-10-13 17:42:37.541538] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:29.195 [2024-10-13 17:42:37.541574] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:29.195 [2024-10-13 17:42:37.541592] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:29.195 [2024-10-13 17:42:37.541607] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:29.195 [2024-10-13 17:42:37.541614] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:29.195 [2024-10-13 17:42:37.588751] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcc9340 was disconnected and freed. delete nvme_qpair. 
00:32:30.204 17:42:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:30.204 17:42:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.204 17:42:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:30.204 17:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:30.204 17:42:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:30.204 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:32:30.204 17:42:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:30.204 17:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:30.204 17:42:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:30.204 17:42:38 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:30.204 17:42:38 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3396282 00:32:30.204 17:42:38 -- common/autotest_common.sh@926 -- # '[' -z 3396282 ']' 00:32:30.204 17:42:38 -- common/autotest_common.sh@930 -- # kill -0 3396282 00:32:30.204 17:42:38 -- common/autotest_common.sh@931 -- # uname 00:32:30.204 17:42:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:30.204 17:42:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3396282 00:32:30.204 17:42:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:30.204 17:42:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:30.204 17:42:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3396282' 00:32:30.204 killing process with pid 3396282 00:32:30.204 17:42:38 -- common/autotest_common.sh@945 -- # kill 3396282 00:32:30.204 17:42:38 -- common/autotest_common.sh@950 -- # wait 3396282 00:32:30.204 17:42:38 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:30.204 17:42:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:30.204 17:42:38 -- nvmf/common.sh@116 -- # sync 00:32:30.204 17:42:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:30.204 
17:42:38 -- nvmf/common.sh@119 -- # set +e 00:32:30.204 17:42:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:30.204 17:42:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:30.204 rmmod nvme_tcp 00:32:30.204 rmmod nvme_fabrics 00:32:30.465 rmmod nvme_keyring 00:32:30.465 17:42:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:30.465 17:42:38 -- nvmf/common.sh@123 -- # set -e 00:32:30.465 17:42:38 -- nvmf/common.sh@124 -- # return 0 00:32:30.465 17:42:38 -- nvmf/common.sh@477 -- # '[' -n 3396145 ']' 00:32:30.465 17:42:38 -- nvmf/common.sh@478 -- # killprocess 3396145 00:32:30.465 17:42:38 -- common/autotest_common.sh@926 -- # '[' -z 3396145 ']' 00:32:30.465 17:42:38 -- common/autotest_common.sh@930 -- # kill -0 3396145 00:32:30.465 17:42:38 -- common/autotest_common.sh@931 -- # uname 00:32:30.465 17:42:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:30.465 17:42:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3396145 00:32:30.465 17:42:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:30.465 17:42:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:30.465 17:42:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3396145' 00:32:30.465 killing process with pid 3396145 00:32:30.465 17:42:38 -- common/autotest_common.sh@945 -- # kill 3396145 00:32:30.465 17:42:38 -- common/autotest_common.sh@950 -- # wait 3396145 00:32:30.465 17:42:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:30.465 17:42:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:30.465 17:42:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:30.465 17:42:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:30.465 17:42:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:30.465 17:42:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.465 17:42:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:32:30.465 17:42:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.009 17:42:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:33.009 00:32:33.009 real 0m23.213s 00:32:33.009 user 0m26.416s 00:32:33.009 sys 0m6.823s 00:32:33.009 17:42:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:33.009 17:42:41 -- common/autotest_common.sh@10 -- # set +x 00:32:33.009 ************************************ 00:32:33.009 END TEST nvmf_discovery_remove_ifc 00:32:33.009 ************************************ 00:32:33.009 17:42:41 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:32:33.009 17:42:41 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:33.009 17:42:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:33.009 17:42:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:33.009 17:42:41 -- common/autotest_common.sh@10 -- # set +x 00:32:33.009 ************************************ 00:32:33.009 START TEST nvmf_digest 00:32:33.009 ************************************ 00:32:33.009 17:42:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:33.009 * Looking for test storage... 
00:32:33.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:33.009 17:42:41 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.009 17:42:41 -- nvmf/common.sh@7 -- # uname -s 00:32:33.009 17:42:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.009 17:42:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.009 17:42:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.009 17:42:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.009 17:42:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.009 17:42:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.009 17:42:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.009 17:42:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.009 17:42:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.009 17:42:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.009 17:42:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:33.009 17:42:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:33.009 17:42:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.009 17:42:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.009 17:42:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.009 17:42:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.009 17:42:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.009 17:42:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.009 17:42:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.009 17:42:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.009 17:42:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.009 17:42:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.009 17:42:41 -- paths/export.sh@5 -- # export PATH 00:32:33.009 17:42:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.009 17:42:41 -- nvmf/common.sh@46 -- # : 0 00:32:33.009 17:42:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:33.009 17:42:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:33.009 17:42:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:33.009 17:42:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.009 17:42:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.009 17:42:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:33.009 17:42:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:33.009 17:42:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:33.009 17:42:41 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:33.009 17:42:41 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:33.009 17:42:41 -- host/digest.sh@16 -- # runtime=2 00:32:33.009 17:42:41 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:32:33.009 17:42:41 -- host/digest.sh@132 -- # nvmftestinit 00:32:33.009 17:42:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:33.009 17:42:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.009 17:42:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:33.009 17:42:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:33.009 17:42:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:33.009 17:42:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.009 17:42:41 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:32:33.009 17:42:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.009 17:42:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:33.009 17:42:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:33.009 17:42:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:33.009 17:42:41 -- common/autotest_common.sh@10 -- # set +x 00:32:41.157 17:42:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:41.157 17:42:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:41.157 17:42:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:41.157 17:42:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:41.157 17:42:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:41.157 17:42:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:41.157 17:42:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:41.157 17:42:48 -- nvmf/common.sh@294 -- # net_devs=() 00:32:41.157 17:42:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:41.157 17:42:48 -- nvmf/common.sh@295 -- # e810=() 00:32:41.157 17:42:48 -- nvmf/common.sh@295 -- # local -ga e810 00:32:41.157 17:42:48 -- nvmf/common.sh@296 -- # x722=() 00:32:41.157 17:42:48 -- nvmf/common.sh@296 -- # local -ga x722 00:32:41.157 17:42:48 -- nvmf/common.sh@297 -- # mlx=() 00:32:41.157 17:42:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:41.157 17:42:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.157 17:42:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.157 17:42:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.157 17:42:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.157 17:42:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.157 17:42:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.157 17:42:48 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.157 17:42:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.157 17:42:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.157 17:42:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.158 17:42:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.158 17:42:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:41.158 17:42:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:41.158 17:42:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:41.158 17:42:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:41.158 17:42:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:41.158 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:41.158 17:42:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:41.158 17:42:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:41.158 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:41.158 17:42:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:32:41.158 17:42:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:41.158 17:42:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:41.158 17:42:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.158 17:42:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:41.158 17:42:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.158 17:42:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:41.158 Found net devices under 0000:31:00.0: cvl_0_0 00:32:41.158 17:42:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.158 17:42:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:41.158 17:42:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.158 17:42:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:41.158 17:42:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.158 17:42:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:41.158 Found net devices under 0000:31:00.1: cvl_0_1 00:32:41.158 17:42:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.158 17:42:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:41.158 17:42:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:41.158 17:42:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:41.158 17:42:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.158 17:42:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.158 17:42:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.158 17:42:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:41.158 17:42:48 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.158 17:42:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.158 17:42:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:41.158 17:42:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.158 17:42:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.158 17:42:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:41.158 17:42:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:41.158 17:42:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.158 17:42:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.158 17:42:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.158 17:42:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.158 17:42:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:41.158 17:42:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.158 17:42:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.158 17:42:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.158 17:42:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:41.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:32:41.158 00:32:41.158 --- 10.0.0.2 ping statistics --- 00:32:41.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.158 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:32:41.158 17:42:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:41.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:32:41.158 00:32:41.158 --- 10.0.0.1 ping statistics --- 00:32:41.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.158 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:32:41.158 17:42:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.158 17:42:48 -- nvmf/common.sh@410 -- # return 0 00:32:41.158 17:42:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:41.158 17:42:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:41.158 17:42:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:41.158 17:42:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:41.158 17:42:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:41.158 17:42:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:41.158 17:42:48 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:41.158 17:42:48 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:32:41.158 17:42:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:41.158 17:42:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:41.158 17:42:48 -- common/autotest_common.sh@10 -- # set +x 00:32:41.158 ************************************ 00:32:41.158 START TEST nvmf_digest_clean 00:32:41.158 ************************************ 00:32:41.158 17:42:48 -- common/autotest_common.sh@1104 -- # run_digest 00:32:41.158 17:42:48 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:32:41.158 17:42:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:41.158 17:42:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:41.158 17:42:48 -- common/autotest_common.sh@10 -- # set +x 00:32:41.158 17:42:48 -- nvmf/common.sh@469 -- # nvmfpid=3403028 00:32:41.158 17:42:48 -- nvmf/common.sh@470 -- # waitforlisten 3403028 00:32:41.158 17:42:48 -- 
common/autotest_common.sh@819 -- # '[' -z 3403028 ']' 00:32:41.158 17:42:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:41.158 17:42:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:41.158 17:42:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:41.158 17:42:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:41.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:41.158 17:42:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:41.158 17:42:48 -- common/autotest_common.sh@10 -- # set +x 00:32:41.158 [2024-10-13 17:42:48.659589] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:41.158 [2024-10-13 17:42:48.659646] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.158 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.158 [2024-10-13 17:42:48.730799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.158 [2024-10-13 17:42:48.760108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:41.158 [2024-10-13 17:42:48.760225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:41.158 [2024-10-13 17:42:48.760234] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.158 [2024-10-13 17:42:48.760243] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
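The namespace setup that produced the ping exchange above (nvmf/common.sh@228-266) follows a fixed command sequence: flush both ports, move the target-side port into a private netns, address both sides, bring links up, and open TCP port 4420. A dry-run sketch that only echoes the commands (interface names and addresses are taken from this run; drop `run=echo` and run as root to apply it for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence from nvmf/common.sh:
# the target-side port lives in a private network namespace while the
# initiator-side port stays in the root namespace.
set -euo pipefail

emit_setup() {
    local run=echo                      # 'echo' = dry run; empty = execute
    local target_if=cvl_0_0 initiator_if=cvl_0_1
    local target_ip=10.0.0.2 initiator_ip=10.0.0.1
    local ns=cvl_0_0_ns_spdk

    $run ip -4 addr flush "$target_if"
    $run ip -4 addr flush "$initiator_if"
    $run ip netns add "$ns"
    $run ip link set "$target_if" netns "$ns"
    $run ip addr add "$initiator_ip/24" dev "$initiator_if"
    $run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

emit_setup
```

After this sequence the two ports can only reach each other over the wire, which is why the log pings 10.0.0.2 from the root namespace and 10.0.0.1 from inside the namespace to validate the link.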
00:32:41.158 [2024-10-13 17:42:48.760264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.158 17:42:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:41.158 17:42:48 -- common/autotest_common.sh@852 -- # return 0 00:32:41.158 17:42:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:41.158 17:42:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:41.158 17:42:48 -- common/autotest_common.sh@10 -- # set +x 00:32:41.158 17:42:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:41.158 17:42:48 -- host/digest.sh@120 -- # common_target_config 00:32:41.158 17:42:48 -- host/digest.sh@43 -- # rpc_cmd 00:32:41.158 17:42:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:41.158 17:42:48 -- common/autotest_common.sh@10 -- # set +x 00:32:41.158 null0 00:32:41.158 [2024-10-13 17:42:48.963294] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:41.158 [2024-10-13 17:42:48.987498] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.158 17:42:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:41.158 17:42:48 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:32:41.158 17:42:48 -- host/digest.sh@77 -- # local rw bs qd 00:32:41.158 17:42:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:41.158 17:42:48 -- host/digest.sh@80 -- # rw=randread 00:32:41.158 17:42:48 -- host/digest.sh@80 -- # bs=4096 00:32:41.158 17:42:48 -- host/digest.sh@80 -- # qd=128 00:32:41.158 17:42:48 -- host/digest.sh@82 -- # bperfpid=3403154 00:32:41.158 17:42:48 -- host/digest.sh@83 -- # waitforlisten 3403154 /var/tmp/bperf.sock 00:32:41.158 17:42:48 -- common/autotest_common.sh@819 -- # '[' -z 3403154 ']' 00:32:41.158 17:42:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:41.158 17:42:48 -- host/digest.sh@81 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:41.158 17:42:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:41.158 17:42:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:41.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:41.158 17:42:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:41.158 17:42:48 -- common/autotest_common.sh@10 -- # set +x 00:32:41.158 [2024-10-13 17:42:49.040176] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:41.158 [2024-10-13 17:42:49.040221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403154 ] 00:32:41.158 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.158 [2024-10-13 17:42:49.116382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.158 [2024-10-13 17:42:49.145244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.419 17:42:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:41.419 17:42:49 -- common/autotest_common.sh@852 -- # return 0 00:32:41.419 17:42:49 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:41.419 17:42:49 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:41.419 17:42:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:41.679 17:42:50 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:41.679 17:42:50 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:41.940 nvme0n1 00:32:41.940 17:42:50 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:41.940 17:42:50 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:41.940 Running I/O for 2 seconds... 00:32:44.483 00:32:44.483 Latency(us) 00:32:44.483 [2024-10-13T15:42:53.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.483 [2024-10-13T15:42:53.007Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:44.483 nvme0n1 : 2.00 16232.66 63.41 0.00 0.00 7877.98 3317.76 22828.37 00:32:44.483 [2024-10-13T15:42:53.007Z] =================================================================================================================== 00:32:44.483 [2024-10-13T15:42:53.007Z] Total : 16232.66 63.41 0.00 0.00 7877.98 3317.76 22828.37 00:32:44.483 0 00:32:44.483 17:42:52 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:44.483 17:42:52 -- host/digest.sh@92 -- # get_accel_stats 00:32:44.483 17:42:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:44.483 17:42:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:44.483 | select(.opcode=="crc32c") 00:32:44.483 | "\(.module_name) \(.executed)"' 00:32:44.483 17:42:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:44.483 17:42:52 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:44.483 17:42:52 -- host/digest.sh@93 -- # exp_module=software 00:32:44.483 17:42:52 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:44.483 17:42:52 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:44.483 17:42:52 -- host/digest.sh@97 -- # killprocess 3403154 
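The get_accel_stats step above pipes the accel_get_stats RPC response through a jq filter and hands the result to `read -r acc_module acc_executed` (host/digest.sh@92-95). A minimal sketch of that consumer side; the canned `software 16232` line is a stand-in for the jq output of this run:

```shell
#!/usr/bin/env bash
# Sketch of the host/digest.sh@92-95 check: the jq-filtered
# accel_get_stats output is a "module_name executed" pair, and the test
# passes when the expected module executed a non-zero number of
# crc32c operations.
set -euo pipefail

# Stand-in for: bperf_rpc accel_get_stats | jq -rc '.operations[]
#   | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
get_accel_stats() { echo "software 16232"; }

read -r acc_module acc_executed < <(get_accel_stats)

exp_module=software                  # no hardware accel module requested
(( acc_executed > 0 ))               # crc32c must actually have run
[[ $acc_module == "$exp_module" ]]
echo "digest verified by $acc_module ($acc_executed ops)"
```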
00:32:44.483 17:42:52 -- common/autotest_common.sh@926 -- # '[' -z 3403154 ']' 00:32:44.483 17:42:52 -- common/autotest_common.sh@930 -- # kill -0 3403154 00:32:44.483 17:42:52 -- common/autotest_common.sh@931 -- # uname 00:32:44.483 17:42:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:44.483 17:42:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3403154 00:32:44.483 17:42:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:44.483 17:42:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:44.483 17:42:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3403154' 00:32:44.483 killing process with pid 3403154 00:32:44.483 17:42:52 -- common/autotest_common.sh@945 -- # kill 3403154 00:32:44.483 Received shutdown signal, test time was about 2.000000 seconds 00:32:44.483 00:32:44.483 Latency(us) 00:32:44.483 [2024-10-13T15:42:53.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.483 [2024-10-13T15:42:53.007Z] =================================================================================================================== 00:32:44.483 [2024-10-13T15:42:53.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:44.483 17:42:52 -- common/autotest_common.sh@950 -- # wait 3403154 00:32:44.483 17:42:52 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:32:44.483 17:42:52 -- host/digest.sh@77 -- # local rw bs qd 00:32:44.483 17:42:52 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:44.483 17:42:52 -- host/digest.sh@80 -- # rw=randread 00:32:44.483 17:42:52 -- host/digest.sh@80 -- # bs=131072 00:32:44.483 17:42:52 -- host/digest.sh@80 -- # qd=16 00:32:44.483 17:42:52 -- host/digest.sh@82 -- # bperfpid=3403847 00:32:44.483 17:42:52 -- host/digest.sh@83 -- # waitforlisten 3403847 /var/tmp/bperf.sock 00:32:44.483 17:42:52 -- common/autotest_common.sh@819 -- # '[' -z 3403847 ']' 00:32:44.483 17:42:52 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:44.483 17:42:52 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:44.483 17:42:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:44.483 17:42:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:44.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:44.483 17:42:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:44.483 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:32:44.483 [2024-10-13 17:42:52.897890] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:44.484 [2024-10-13 17:42:52.897944] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403847 ] 00:32:44.484 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:44.484 Zero copy mechanism will not be used. 
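The killprocess helper used after each benchmark run above (autotest_common.sh@926-950) guards the kill with a liveness check and a process-name check before signalling. A simplified runnable sketch; the `sleep` target stands in for a bdevperf instance:

```shell
#!/usr/bin/env bash
# Simplified sketch of killprocess from autotest_common.sh: refuse to
# signal a pid that is already gone or that turns out to be the 'sudo'
# wrapper, then kill and reap the target.
set -euo pipefail

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1        # still running?
    process_name=$(ps -o comm= -p "$pid")
    [[ $process_name != sudo ]] || return 1       # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap; SIGTERM exit is expected
}

# Illustrative target process.
sleep 30 &
victim=$!
killprocess "$victim"
kill -0 "$victim" 2>/dev/null || echo "pid $victim gone"
```

The `ps -o comm=` check is what lets the log print `process_name=reactor_1`: comm reflects the thread the pid maps to, not the binary path.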
00:32:44.484 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.484 [2024-10-13 17:42:52.975141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.484 [2024-10-13 17:42:53.004525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.423 17:42:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:45.423 17:42:53 -- common/autotest_common.sh@852 -- # return 0 00:32:45.423 17:42:53 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:45.423 17:42:53 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:45.423 17:42:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:45.423 17:42:53 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.423 17:42:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.682 nvme0n1 00:32:45.682 17:42:54 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:45.682 17:42:54 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:45.953 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:45.953 Zero copy mechanism will not be used. 00:32:45.953 Running I/O for 2 seconds... 
00:32:47.865 00:32:47.865 Latency(us) 00:32:47.865 [2024-10-13T15:42:56.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.865 [2024-10-13T15:42:56.389Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:47.865 nvme0n1 : 2.00 3407.55 425.94 0.00 0.00 4691.49 669.01 12451.84 00:32:47.865 [2024-10-13T15:42:56.389Z] =================================================================================================================== 00:32:47.865 [2024-10-13T15:42:56.389Z] Total : 3407.55 425.94 0.00 0.00 4691.49 669.01 12451.84 00:32:47.865 0 00:32:47.865 17:42:56 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:47.865 17:42:56 -- host/digest.sh@92 -- # get_accel_stats 00:32:47.865 17:42:56 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:47.865 17:42:56 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:47.865 | select(.opcode=="crc32c") 00:32:47.865 | "\(.module_name) \(.executed)"' 00:32:47.865 17:42:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:48.126 17:42:56 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:48.126 17:42:56 -- host/digest.sh@93 -- # exp_module=software 00:32:48.126 17:42:56 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:48.126 17:42:56 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:48.126 17:42:56 -- host/digest.sh@97 -- # killprocess 3403847 00:32:48.126 17:42:56 -- common/autotest_common.sh@926 -- # '[' -z 3403847 ']' 00:32:48.126 17:42:56 -- common/autotest_common.sh@930 -- # kill -0 3403847 00:32:48.126 17:42:56 -- common/autotest_common.sh@931 -- # uname 00:32:48.126 17:42:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:48.126 17:42:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3403847 00:32:48.126 17:42:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:48.126 17:42:56 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:48.126 17:42:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3403847' 00:32:48.126 killing process with pid 3403847 00:32:48.126 17:42:56 -- common/autotest_common.sh@945 -- # kill 3403847 00:32:48.126 Received shutdown signal, test time was about 2.000000 seconds 00:32:48.126 00:32:48.126 Latency(us) 00:32:48.126 [2024-10-13T15:42:56.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.126 [2024-10-13T15:42:56.650Z] =================================================================================================================== 00:32:48.126 [2024-10-13T15:42:56.650Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:48.126 17:42:56 -- common/autotest_common.sh@950 -- # wait 3403847 00:32:48.126 17:42:56 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:32:48.126 17:42:56 -- host/digest.sh@77 -- # local rw bs qd 00:32:48.126 17:42:56 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:48.126 17:42:56 -- host/digest.sh@80 -- # rw=randwrite 00:32:48.126 17:42:56 -- host/digest.sh@80 -- # bs=4096 00:32:48.126 17:42:56 -- host/digest.sh@80 -- # qd=128 00:32:48.126 17:42:56 -- host/digest.sh@82 -- # bperfpid=3404538 00:32:48.126 17:42:56 -- host/digest.sh@83 -- # waitforlisten 3404538 /var/tmp/bperf.sock 00:32:48.126 17:42:56 -- common/autotest_common.sh@819 -- # '[' -z 3404538 ']' 00:32:48.126 17:42:56 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:48.126 17:42:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:48.126 17:42:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:48.126 17:42:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:32:48.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:48.126 17:42:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:48.126 17:42:56 -- common/autotest_common.sh@10 -- # set +x 00:32:48.126 [2024-10-13 17:42:56.639662] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:48.126 [2024-10-13 17:42:56.639717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404538 ] 00:32:48.398 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.398 [2024-10-13 17:42:56.718240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.398 [2024-10-13 17:42:56.744916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.982 17:42:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:48.982 17:42:57 -- common/autotest_common.sh@852 -- # return 0 00:32:48.982 17:42:57 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:48.982 17:42:57 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:48.982 17:42:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:49.244 17:42:57 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:49.244 17:42:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:49.505 nvme0n1 00:32:49.505 17:42:57 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:49.505 17:42:57 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:32:49.766 Running I/O for 2 seconds... 00:32:51.682 00:32:51.682 Latency(us) 00:32:51.682 [2024-10-13T15:43:00.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.682 [2024-10-13T15:43:00.206Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.682 nvme0n1 : 2.01 22786.06 89.01 0.00 0.00 5611.99 2007.04 10158.08 00:32:51.682 [2024-10-13T15:43:00.206Z] =================================================================================================================== 00:32:51.682 [2024-10-13T15:43:00.206Z] Total : 22786.06 89.01 0.00 0.00 5611.99 2007.04 10158.08 00:32:51.682 0 00:32:51.682 17:43:00 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:51.682 17:43:00 -- host/digest.sh@92 -- # get_accel_stats 00:32:51.682 17:43:00 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:51.682 17:43:00 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:51.682 | select(.opcode=="crc32c") 00:32:51.682 | "\(.module_name) \(.executed)"' 00:32:51.682 17:43:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:51.944 17:43:00 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:51.944 17:43:00 -- host/digest.sh@93 -- # exp_module=software 00:32:51.944 17:43:00 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:51.944 17:43:00 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:51.944 17:43:00 -- host/digest.sh@97 -- # killprocess 3404538 00:32:51.944 17:43:00 -- common/autotest_common.sh@926 -- # '[' -z 3404538 ']' 00:32:51.944 17:43:00 -- common/autotest_common.sh@930 -- # kill -0 3404538 00:32:51.944 17:43:00 -- common/autotest_common.sh@931 -- # uname 00:32:51.944 17:43:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:51.944 17:43:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3404538 00:32:51.944 17:43:00 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:51.944 17:43:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:51.944 17:43:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3404538' 00:32:51.944 killing process with pid 3404538 00:32:51.944 17:43:00 -- common/autotest_common.sh@945 -- # kill 3404538 00:32:51.944 Received shutdown signal, test time was about 2.000000 seconds 00:32:51.944 00:32:51.944 Latency(us) 00:32:51.944 [2024-10-13T15:43:00.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.944 [2024-10-13T15:43:00.468Z] =================================================================================================================== 00:32:51.944 [2024-10-13T15:43:00.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:51.944 17:43:00 -- common/autotest_common.sh@950 -- # wait 3404538 00:32:51.944 17:43:00 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:32:51.944 17:43:00 -- host/digest.sh@77 -- # local rw bs qd 00:32:51.944 17:43:00 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:51.944 17:43:00 -- host/digest.sh@80 -- # rw=randwrite 00:32:51.944 17:43:00 -- host/digest.sh@80 -- # bs=131072 00:32:51.944 17:43:00 -- host/digest.sh@80 -- # qd=16 00:32:51.944 17:43:00 -- host/digest.sh@82 -- # bperfpid=3405232 00:32:51.944 17:43:00 -- host/digest.sh@83 -- # waitforlisten 3405232 /var/tmp/bperf.sock 00:32:51.944 17:43:00 -- common/autotest_common.sh@819 -- # '[' -z 3405232 ']' 00:32:51.944 17:43:00 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:51.944 17:43:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:51.944 17:43:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:51.944 17:43:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:51.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:51.944 17:43:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:51.944 17:43:00 -- common/autotest_common.sh@10 -- # set +x 00:32:52.205 [2024-10-13 17:43:00.471457] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:52.205 [2024-10-13 17:43:00.471513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405232 ] 00:32:52.205 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:52.205 Zero copy mechanism will not be used. 00:32:52.205 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.205 [2024-10-13 17:43:00.548905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.205 [2024-10-13 17:43:00.575067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.777 17:43:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:52.777 17:43:01 -- common/autotest_common.sh@852 -- # return 0 00:32:52.777 17:43:01 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:52.777 17:43:01 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:52.777 17:43:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:53.039 17:43:01 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.039 17:43:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.299 nvme0n1 
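Each `waitforlisten` above polls until the freshly started app exposes its RPC endpoint, or bails out if the app dies first. A sketch of that loop; a plain file stands in for the UNIX domain socket (the real helper checks for a live socket), and a background `touch` stands in for bdevperf binding /var/tmp/bperf.sock:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern from autotest_common.sh: wait for
# the RPC endpoint to appear while the app is still alive, giving up
# after max_retries polls.
set -euo pipefail

waitforlisten() {
    local pid=$1 rpc_addr=$2
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        [[ -e $rpc_addr ]] && return 0           # endpoint is up
        sleep 0.1
    done
    return 1
}

sock=$(mktemp -u)                  # path only; the "app" creates it
( sleep 0.3; touch "$sock" ) &     # stands in for the app binding its socket
app=$!
waitforlisten "$app" "$sock" && echo "listening on $sock"
rm -f "$sock"
```

Polling the pid alongside the socket is what makes startup failures fail fast instead of burning the full retry budget.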
00:32:53.299 17:43:01 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:53.299 17:43:01 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:53.560 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:53.560 Zero copy mechanism will not be used. 00:32:53.560 Running I/O for 2 seconds... 00:32:55.473 00:32:55.473 Latency(us) 00:32:55.473 [2024-10-13T15:43:03.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.473 [2024-10-13T15:43:03.997Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:55.473 nvme0n1 : 2.00 6322.75 790.34 0.00 0.00 2527.32 1467.73 13107.20 00:32:55.473 [2024-10-13T15:43:03.997Z] =================================================================================================================== 00:32:55.473 [2024-10-13T15:43:03.997Z] Total : 6322.75 790.34 0.00 0.00 2527.32 1467.73 13107.20 00:32:55.473 0 00:32:55.473 17:43:03 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:55.473 17:43:03 -- host/digest.sh@92 -- # get_accel_stats 00:32:55.473 17:43:03 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:55.473 17:43:03 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:55.473 | select(.opcode=="crc32c") 00:32:55.473 | "\(.module_name) \(.executed)"' 00:32:55.473 17:43:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:55.734 17:43:04 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:55.734 17:43:04 -- host/digest.sh@93 -- # exp_module=software 00:32:55.734 17:43:04 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:55.734 17:43:04 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:55.734 17:43:04 -- host/digest.sh@97 -- # killprocess 3405232 00:32:55.734 17:43:04 -- common/autotest_common.sh@926 -- # '[' -z 3405232 ']' 00:32:55.734 
17:43:04 -- common/autotest_common.sh@930 -- # kill -0 3405232 00:32:55.734 17:43:04 -- common/autotest_common.sh@931 -- # uname 00:32:55.734 17:43:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:55.734 17:43:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3405232 00:32:55.734 17:43:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:55.734 17:43:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:55.734 17:43:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3405232' 00:32:55.734 killing process with pid 3405232 00:32:55.734 17:43:04 -- common/autotest_common.sh@945 -- # kill 3405232 00:32:55.734 Received shutdown signal, test time was about 2.000000 seconds 00:32:55.734 00:32:55.734 Latency(us) 00:32:55.734 [2024-10-13T15:43:04.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.734 [2024-10-13T15:43:04.258Z] =================================================================================================================== 00:32:55.734 [2024-10-13T15:43:04.258Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:55.734 17:43:04 -- common/autotest_common.sh@950 -- # wait 3405232 00:32:55.734 17:43:04 -- host/digest.sh@126 -- # killprocess 3403028 00:32:55.734 17:43:04 -- common/autotest_common.sh@926 -- # '[' -z 3403028 ']' 00:32:55.734 17:43:04 -- common/autotest_common.sh@930 -- # kill -0 3403028 00:32:55.734 17:43:04 -- common/autotest_common.sh@931 -- # uname 00:32:55.734 17:43:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:55.734 17:43:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3403028 00:32:55.995 17:43:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:55.995 17:43:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:55.995 17:43:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3403028' 00:32:55.995 killing process with pid 
3403028 00:32:55.995 17:43:04 -- common/autotest_common.sh@945 -- # kill 3403028 00:32:55.995 17:43:04 -- common/autotest_common.sh@950 -- # wait 3403028 00:32:55.995 00:32:55.995 real 0m15.814s 00:32:55.995 user 0m31.554s 00:32:55.995 sys 0m3.609s 00:32:55.995 17:43:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:55.995 17:43:04 -- common/autotest_common.sh@10 -- # set +x 00:32:55.995 ************************************ 00:32:55.995 END TEST nvmf_digest_clean 00:32:55.995 ************************************ 00:32:55.995 17:43:04 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:32:55.995 17:43:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:55.995 17:43:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:55.995 17:43:04 -- common/autotest_common.sh@10 -- # set +x 00:32:55.995 ************************************ 00:32:55.995 START TEST nvmf_digest_error 00:32:55.995 ************************************ 00:32:55.996 17:43:04 -- common/autotest_common.sh@1104 -- # run_digest_error 00:32:55.996 17:43:04 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:32:55.996 17:43:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:55.996 17:43:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:55.996 17:43:04 -- common/autotest_common.sh@10 -- # set +x 00:32:55.996 17:43:04 -- nvmf/common.sh@469 -- # nvmfpid=3406060 00:32:55.996 17:43:04 -- nvmf/common.sh@470 -- # waitforlisten 3406060 00:32:55.996 17:43:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:55.996 17:43:04 -- common/autotest_common.sh@819 -- # '[' -z 3406060 ']' 00:32:55.996 17:43:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.996 17:43:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:55.996 17:43:04 -- common/autotest_common.sh@826 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.996 17:43:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:55.996 17:43:04 -- common/autotest_common.sh@10 -- # set +x 00:32:55.996 [2024-10-13 17:43:04.516013] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:55.996 [2024-10-13 17:43:04.516076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.257 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.257 [2024-10-13 17:43:04.584274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.257 [2024-10-13 17:43:04.614753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:56.257 [2024-10-13 17:43:04.614876] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:56.257 [2024-10-13 17:43:04.614885] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:56.257 [2024-10-13 17:43:04.614892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:56.257 [2024-10-13 17:43:04.614911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.829 17:43:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:56.829 17:43:05 -- common/autotest_common.sh@852 -- # return 0 00:32:56.829 17:43:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:56.829 17:43:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:56.829 17:43:05 -- common/autotest_common.sh@10 -- # set +x 00:32:56.829 17:43:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:56.829 17:43:05 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:56.829 17:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.829 17:43:05 -- common/autotest_common.sh@10 -- # set +x 00:32:56.829 [2024-10-13 17:43:05.332989] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:56.829 17:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.829 17:43:05 -- host/digest.sh@104 -- # common_target_config 00:32:56.829 17:43:05 -- host/digest.sh@43 -- # rpc_cmd 00:32:56.829 17:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.829 17:43:05 -- common/autotest_common.sh@10 -- # set +x 00:32:57.089 null0 00:32:57.089 [2024-10-13 17:43:05.407334] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.089 [2024-10-13 17:43:05.431548] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.089 17:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.089 17:43:05 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:32:57.089 17:43:05 -- host/digest.sh@54 -- # local rw bs qd 00:32:57.089 17:43:05 -- host/digest.sh@56 -- # rw=randread 00:32:57.089 17:43:05 -- host/digest.sh@56 -- # bs=4096 00:32:57.089 17:43:05 -- host/digest.sh@56 -- # qd=128 00:32:57.089 17:43:05 -- 
host/digest.sh@58 -- # bperfpid=3406299 00:32:57.089 17:43:05 -- host/digest.sh@60 -- # waitforlisten 3406299 /var/tmp/bperf.sock 00:32:57.089 17:43:05 -- common/autotest_common.sh@819 -- # '[' -z 3406299 ']' 00:32:57.089 17:43:05 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:57.089 17:43:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.089 17:43:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:57.089 17:43:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:57.089 17:43:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:57.089 17:43:05 -- common/autotest_common.sh@10 -- # set +x 00:32:57.089 [2024-10-13 17:43:05.484376] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:32:57.089 [2024-10-13 17:43:05.484422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406299 ] 00:32:57.089 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.089 [2024-10-13 17:43:05.561650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.089 [2024-10-13 17:43:05.588469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.030 17:43:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:58.030 17:43:06 -- common/autotest_common.sh@852 -- # return 0 00:32:58.030 17:43:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:58.030 17:43:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:58.030 17:43:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:58.030 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.030 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:32:58.030 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.030 17:43:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.030 17:43:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.291 nvme0n1 00:32:58.291 17:43:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:58.291 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.291 17:43:06 -- common/autotest_common.sh@10 -- # 
set +x 00:32:58.291 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.291 17:43:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:58.291 17:43:06 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:58.291 Running I/O for 2 seconds... 00:32:58.291 [2024-10-13 17:43:06.792545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.291 [2024-10-13 17:43:06.792575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.291 [2024-10-13 17:43:06.792583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.291 [2024-10-13 17:43:06.807368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.291 [2024-10-13 17:43:06.807387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.291 [2024-10-13 17:43:06.807394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.821783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.821801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.821813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.836627] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.836645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.836652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.851595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.851613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.851620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.866698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.866716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.866722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.881367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.881384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.881391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.895901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.895919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.895925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.911267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.911284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.911291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.925976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.925993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.926000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.940969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.940987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.940993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.955681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.955698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.955705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.970401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.970418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.970425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.984997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.985014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.985021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:06.999893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:06.999909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:06.999916] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:07.014936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:07.014954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:07.014960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:07.029568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.552 [2024-10-13 17:43:07.029585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.552 [2024-10-13 17:43:07.029591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.552 [2024-10-13 17:43:07.044377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.553 [2024-10-13 17:43:07.044394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.553 [2024-10-13 17:43:07.044400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.553 [2024-10-13 17:43:07.058519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.553 [2024-10-13 17:43:07.058536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15130 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:58.553 [2024-10-13 17:43:07.058543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.553 [2024-10-13 17:43:07.073121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.553 [2024-10-13 17:43:07.073139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.553 [2024-10-13 17:43:07.073149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.088573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.088590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.088596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.103224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.103241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.103248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.117952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.117969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:18073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.117975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.132613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.132630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.132636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.147462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.147479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.147485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.161884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.161901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.161907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.176559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.176577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.176583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.191494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.191511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.191517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.206424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.206447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.206453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.220909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.220926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.220932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.235746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.235763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.235769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.251070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.251087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.251093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.265622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.265639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.265645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.280161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.280177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.280184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.294910] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.294927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.294933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.308720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.308737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.308743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.814 [2024-10-13 17:43:07.323407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:58.814 [2024-10-13 17:43:07.323424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.814 [2024-10-13 17:43:07.323431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.075 [2024-10-13 17:43:07.338811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.075 [2024-10-13 17:43:07.338828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.075 [2024-10-13 17:43:07.338835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:32:59.075 [2024-10-13 17:43:07.353185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.075 [2024-10-13 17:43:07.353201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.075 [2024-10-13 17:43:07.353207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.075 [2024-10-13 17:43:07.367963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.075 [2024-10-13 17:43:07.367979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.075 [2024-10-13 17:43:07.367986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.075 [2024-10-13 17:43:07.382688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.075 [2024-10-13 17:43:07.382705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.075 [2024-10-13 17:43:07.382711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.075 [2024-10-13 17:43:07.397739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.075 [2024-10-13 17:43:07.397757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.075 [2024-10-13 17:43:07.397763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.075 [2024-10-13 17:43:07.412451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.075 [2024-10-13 17:43:07.412468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.075 [2024-10-13 17:43:07.412474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.075 [2024-10-13 17:43:07.427731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.075 [2024-10-13 17:43:07.427748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.075 [2024-10-13 17:43:07.427754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.075 [2024-10-13 17:43:07.442262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.075 [2024-10-13 17:43:07.442278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 17:43:07.442284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.076 [2024-10-13 17:43:07.456607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.076 [2024-10-13 17:43:07.456624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 
17:43:07.456634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.076 [2024-10-13 17:43:07.471844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.076 [2024-10-13 17:43:07.471860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 17:43:07.471866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.076 [2024-10-13 17:43:07.486435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.076 [2024-10-13 17:43:07.486451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 17:43:07.486457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.076 [2024-10-13 17:43:07.500895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.076 [2024-10-13 17:43:07.500912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 17:43:07.500918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.076 [2024-10-13 17:43:07.515179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.076 [2024-10-13 17:43:07.515195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:812 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 17:43:07.515201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.076 [2024-10-13 17:43:07.530472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.076 [2024-10-13 17:43:07.530488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 17:43:07.530495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.076 [2024-10-13 17:43:07.544892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.076 [2024-10-13 17:43:07.544908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 17:43:07.544914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.076 [2024-10-13 17:43:07.559069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.076 [2024-10-13 17:43:07.559086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 17:43:07.559092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.076 [2024-10-13 17:43:07.574102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.076 [2024-10-13 17:43:07.574118] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 17:43:07.574125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.076 [2024-10-13 17:43:07.589289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.076 [2024-10-13 17:43:07.589309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.076 [2024-10-13 17:43:07.589315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.604655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.604671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.604678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.619763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.619780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.619787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.634384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 
17:43:07.634400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.634407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.648985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.649002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.649009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.663983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.663999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.664006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.678612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.678628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.678634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.693383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.693399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.693406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.708049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.708068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.708075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.722750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.722766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.722773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.737405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.737421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.737427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.758090] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.758107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.758113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.773181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.773197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.773204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.787895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.787912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.787919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.802300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.802317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.802323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.810629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.810645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.810651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.824547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.824565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.824572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.839782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.839799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.839808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.337 [2024-10-13 17:43:07.854343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.337 [2024-10-13 17:43:07.854359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.337 [2024-10-13 17:43:07.854366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:07.869446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:07.869463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:07.869469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:07.884107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:07.884125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:07.884131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:07.899410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:07.899426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:07.899433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:07.920039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:07.920055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:07.920065] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:07.934577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:07.934594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:07.934600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:07.949084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:07.949101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:07.949107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:07.964278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:07.964295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:07.964301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:07.978964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:07.978981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22151 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:07.978987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:07.994001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:07.994018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:07.994024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:08.008661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:08.008678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:08.008685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:08.023529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:08.023546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:08.023553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:08.038616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:08.038632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:104 nsid:1 lba:23746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:08.038638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:08.053229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:08.053246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:08.053253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:08.066860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:08.066876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:08.066882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:08.081660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:08.081677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:08.081683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:08.096324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:08.096340] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:08.096349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:08.111010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:08.111026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:08.111033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.604 [2024-10-13 17:43:08.126156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.604 [2024-10-13 17:43:08.126173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.604 [2024-10-13 17:43:08.126179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.141075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.141091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.141098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.155837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.155854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.155861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.170496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.170512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.170518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.185495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.185512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.185519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.200294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.200310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.200317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.214773] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.214790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.214796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.229544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.229564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.229570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.244947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.244964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.244970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.259755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.259772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.259778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.274308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.274324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.274331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.288899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.288916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.288922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.303757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.303774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.303781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.317516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.317533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.317540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.329660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.329677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.329684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.866 [2024-10-13 17:43:08.341563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.866 [2024-10-13 17:43:08.341580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.866 [2024-10-13 17:43:08.341587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.867 [2024-10-13 17:43:08.351670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.867 [2024-10-13 17:43:08.351687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.867 [2024-10-13 17:43:08.351693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.867 [2024-10-13 17:43:08.363338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.867 [2024-10-13 17:43:08.363355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.867 [2024-10-13 17:43:08.363361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.867 [2024-10-13 17:43:08.378873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:32:59.867 [2024-10-13 17:43:08.378890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.867 [2024-10-13 17:43:08.378896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.393525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.393542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.393549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.409030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.409047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.409053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.423351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.423368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:00.128 [2024-10-13 17:43:08.423374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.438279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.438296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.438302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.452849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.452866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.452872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.467506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.467522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.467532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.481954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.481971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:2506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.481977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.496534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.496551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.496558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.511459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.511476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.511482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.531991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.532008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.532014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.546970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.546986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.546992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.561503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.561521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.561527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.575881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.575897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.575903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.585044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.585066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.585073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.599661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 
00:33:00.128 [2024-10-13 17:43:08.599680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.599686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.614988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.615005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.615011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.629694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.629711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.629718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.128 [2024-10-13 17:43:08.644572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.128 [2024-10-13 17:43:08.644588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.128 [2024-10-13 17:43:08.644595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.390 [2024-10-13 17:43:08.659419] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.390 [2024-10-13 17:43:08.659435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.390 [2024-10-13 17:43:08.659442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.390 [2024-10-13 17:43:08.674311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.390 [2024-10-13 17:43:08.674327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.390 [2024-10-13 17:43:08.674333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.390 [2024-10-13 17:43:08.688882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.390 [2024-10-13 17:43:08.688899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.390 [2024-10-13 17:43:08.688905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.390 [2024-10-13 17:43:08.703702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.390 [2024-10-13 17:43:08.703719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.390 [2024-10-13 17:43:08.703726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:00.390 [2024-10-13 17:43:08.718274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.390 [2024-10-13 17:43:08.718291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.390 [2024-10-13 17:43:08.718297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.390 [2024-10-13 17:43:08.732772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.390 [2024-10-13 17:43:08.732789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.390 [2024-10-13 17:43:08.732795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.390 [2024-10-13 17:43:08.747639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.390 [2024-10-13 17:43:08.747656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.390 [2024-10-13 17:43:08.747662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.390 [2024-10-13 17:43:08.762621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x877b00) 00:33:00.390 [2024-10-13 17:43:08.762637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.390 [2024-10-13 17:43:08.762643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.390 00:33:00.390 Latency(us) 00:33:00.390 [2024-10-13T15:43:08.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.390 [2024-10-13T15:43:08.914Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:00.390 nvme0n1 : 2.00 17304.26 67.59 0.00 0.00 7391.10 1911.47 22173.01 00:33:00.390 [2024-10-13T15:43:08.914Z] =================================================================================================================== 00:33:00.390 [2024-10-13T15:43:08.914Z] Total : 17304.26 67.59 0.00 0.00 7391.10 1911.47 22173.01 00:33:00.390 0 00:33:00.390 17:43:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:00.390 17:43:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:00.390 17:43:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:00.390 | .driver_specific 00:33:00.390 | .nvme_error 00:33:00.390 | .status_code 00:33:00.390 | .command_transient_transport_error' 00:33:00.390 17:43:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:00.651 17:43:08 -- host/digest.sh@71 -- # (( 135 > 0 )) 00:33:00.651 17:43:08 -- host/digest.sh@73 -- # killprocess 3406299 00:33:00.651 17:43:08 -- common/autotest_common.sh@926 -- # '[' -z 3406299 ']' 00:33:00.651 17:43:08 -- common/autotest_common.sh@930 -- # kill -0 3406299 00:33:00.651 17:43:08 -- common/autotest_common.sh@931 -- # uname 00:33:00.651 17:43:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:00.651 17:43:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3406299 00:33:00.651 17:43:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:00.651 17:43:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:00.651 17:43:09 -- common/autotest_common.sh@944 -- # echo 'killing process 
with pid 3406299' 00:33:00.651 killing process with pid 3406299 00:33:00.651 17:43:09 -- common/autotest_common.sh@945 -- # kill 3406299 00:33:00.651 Received shutdown signal, test time was about 2.000000 seconds 00:33:00.651 00:33:00.651 Latency(us) 00:33:00.651 [2024-10-13T15:43:09.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.651 [2024-10-13T15:43:09.175Z] =================================================================================================================== 00:33:00.651 [2024-10-13T15:43:09.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:00.651 17:43:09 -- common/autotest_common.sh@950 -- # wait 3406299 00:33:00.651 17:43:09 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:33:00.651 17:43:09 -- host/digest.sh@54 -- # local rw bs qd 00:33:00.651 17:43:09 -- host/digest.sh@56 -- # rw=randread 00:33:00.651 17:43:09 -- host/digest.sh@56 -- # bs=131072 00:33:00.651 17:43:09 -- host/digest.sh@56 -- # qd=16 00:33:00.651 17:43:09 -- host/digest.sh@58 -- # bperfpid=3406991 00:33:00.651 17:43:09 -- host/digest.sh@60 -- # waitforlisten 3406991 /var/tmp/bperf.sock 00:33:00.651 17:43:09 -- common/autotest_common.sh@819 -- # '[' -z 3406991 ']' 00:33:00.651 17:43:09 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:00.651 17:43:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:00.651 17:43:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:00.651 17:43:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:00.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:00.651 17:43:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:00.651 17:43:09 -- common/autotest_common.sh@10 -- # set +x 00:33:00.912 [2024-10-13 17:43:09.182909] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:00.912 [2024-10-13 17:43:09.182964] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406991 ] 00:33:00.912 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:00.912 Zero copy mechanism will not be used. 00:33:00.912 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.912 [2024-10-13 17:43:09.259861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.912 [2024-10-13 17:43:09.286484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.484 17:43:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:01.484 17:43:09 -- common/autotest_common.sh@852 -- # return 0 00:33:01.484 17:43:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:01.484 17:43:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:01.745 17:43:10 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:01.745 17:43:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:01.745 17:43:10 -- common/autotest_common.sh@10 -- # set +x 00:33:01.745 17:43:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:01.745 17:43:10 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.745 17:43:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:02.006 nvme0n1 00:33:02.006 17:43:10 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:02.006 17:43:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.006 17:43:10 -- common/autotest_common.sh@10 -- # set +x 00:33:02.006 17:43:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.006 17:43:10 -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:02.006 17:43:10 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:02.268 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:02.268 Zero copy mechanism will not be used. 00:33:02.268 Running I/O for 2 seconds... 00:33:02.268 [2024-10-13 17:43:10.572992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.573021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.573030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.578475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.578496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.578508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.585750] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.585771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.585778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.594266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.594284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.594292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.603007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.603024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.603031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.609251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.609268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.609275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.616090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.616107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.616114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.625659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.625677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.625683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.635096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.635113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.635120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.643900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.643917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.643923] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.648461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.648479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.648485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.651730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.651748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.651755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.654754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.654772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.654779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.657968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.657985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 
17:43:10.657991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.660959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.660977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.660984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.666993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.667012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.667018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.675450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.675468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.675474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.683388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.683406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.683412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.690711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.690729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.690739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.268 [2024-10-13 17:43:10.699955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.268 [2024-10-13 17:43:10.699973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.268 [2024-10-13 17:43:10.699979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.269 [2024-10-13 17:43:10.706651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.269 [2024-10-13 17:43:10.706669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.269 [2024-10-13 17:43:10.706675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.269 [2024-10-13 17:43:10.713901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.269 [2024-10-13 17:43:10.713918] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.269 [2024-10-13 17:43:10.713924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.269 [2024-10-13 17:43:10.719410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.269 [2024-10-13 17:43:10.719427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.269 [2024-10-13 17:43:10.719433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.269 [2024-10-13 17:43:10.727163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.269 [2024-10-13 17:43:10.727181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.269 [2024-10-13 17:43:10.727187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.269 [2024-10-13 17:43:10.738179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.269 [2024-10-13 17:43:10.738197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.269 [2024-10-13 17:43:10.738203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.269 [2024-10-13 17:43:10.750025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:02.269 [2024-10-13 
17:43:10.750042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.269 [2024-10-13 17:43:10.750048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.269 [2024-10-13 17:43:10.756181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.269 [2024-10-13 17:43:10.756198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.269 [2024-10-13 17:43:10.756205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.269 [2024-10-13 17:43:10.766085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.269 [2024-10-13 17:43:10.766105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.269 [2024-10-13 17:43:10.766111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.269 [2024-10-13 17:43:10.773153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.269 [2024-10-13 17:43:10.773169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.269 [2024-10-13 17:43:10.773176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.269 [2024-10-13 17:43:10.778613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.269 [2024-10-13 17:43:10.778629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.269 [2024-10-13 17:43:10.778636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.269 [2024-10-13 17:43:10.781963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.269 [2024-10-13 17:43:10.781980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.269 [2024-10-13 17:43:10.781987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.269 [2024-10-13 17:43:10.786126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.269 [2024-10-13 17:43:10.786143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.269 [2024-10-13 17:43:10.786149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.792715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.792732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.792739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.796680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.796697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.796704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.800701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.800718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.800724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.805021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.805038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.805045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.812783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.812801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.812808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.817592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.817609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.817615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.820972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.820990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.820996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.829297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.829314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.829321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.834434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.834451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.834458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.844580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.844596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.844604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.853524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.853541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.853547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.531 [2024-10-13 17:43:10.862715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.531 [2024-10-13 17:43:10.862731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.531 [2024-10-13 17:43:10.862738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.873454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.873472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.873483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.880884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.880901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.880908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.887169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.887187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.887193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.894759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.894777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.894783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.901594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.901611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.901618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.908408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.908425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.908431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.916117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.916134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.916140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.923670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.923687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.923694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.929845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.929863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.929869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.934915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.934932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.934938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.939298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.939316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.939322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.947536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.947553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.947559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.953642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.953659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.953666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.957790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.957807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.957814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.960460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.960477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.960484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.967986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.968003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.968010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.973931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.973948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.973955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.984244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.984262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.984272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:10.995023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:10.995041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:10.995048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:11.006312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:11.006330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:11.006336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:11.017124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:11.017141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:11.017147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:11.026842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:11.026859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:11.026865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:11.038294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:11.038312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:11.038319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.532 [2024-10-13 17:43:11.049202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.532 [2024-10-13 17:43:11.049220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.532 [2024-10-13 17:43:11.049226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.794 [2024-10-13 17:43:11.059819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.794 [2024-10-13 17:43:11.059836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.794 [2024-10-13 17:43:11.059843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.794 [2024-10-13 17:43:11.069603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.794 [2024-10-13 17:43:11.069620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.794 [2024-10-13 17:43:11.069626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.794 [2024-10-13 17:43:11.078198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.794 [2024-10-13 17:43:11.078220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.794 [2024-10-13 17:43:11.078226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.794 [2024-10-13 17:43:11.081808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.794 [2024-10-13 17:43:11.081825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.794 [2024-10-13 17:43:11.081832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.794 [2024-10-13 17:43:11.084984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.794 [2024-10-13 17:43:11.085002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.794 [2024-10-13 17:43:11.085008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.794 [2024-10-13 17:43:11.088528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.088545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.088551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.097478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.097496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.097502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.106977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.106994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.107001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.117101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.117118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.117125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.128450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.128467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.128473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.138808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.138825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.138831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.148531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.148548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.148555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.156439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.156456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.156462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.165538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.165556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.165562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.174159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.174177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.174183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.182282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.182299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.182306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.188529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.188547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.188553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.198966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.198984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.198990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.208121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.208138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.208144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.216389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.216407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.216417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.224419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.224437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.224444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.228268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.228286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.228292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.232759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.232776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.232783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.236889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.236906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.236913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.244025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.244042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.244049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.252157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.252174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.252181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.257673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.257690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.257697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.264448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.264466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.264473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.267278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.267299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.267305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.275648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.275665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.275671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.282705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.282722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.282728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.291778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.291795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.291802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.302779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.302797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.302804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.795 [2024-10-13 17:43:11.314943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:02.795 [2024-10-13 17:43:11.314960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.795 [2024-10-13 17:43:11.314966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.058 [2024-10-13 17:43:11.327680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:03.058 [2024-10-13 17:43:11.327698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.058 [2024-10-13 17:43:11.327704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.059 [2024-10-13 17:43:11.340185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:03.059 [2024-10-13 17:43:11.340202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.059 [2024-10-13 17:43:11.340209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.059 [2024-10-13 17:43:11.349357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:03.059 [2024-10-13 17:43:11.349374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.059 [2024-10-13 17:43:11.349381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.059 [2024-10-13 17:43:11.353249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:03.059 [2024-10-13 17:43:11.353266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.059 [2024-10-13 17:43:11.353273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.059 [2024-10-13 17:43:11.360684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:03.059 [2024-10-13 17:43:11.360704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.360711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.368481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.368498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.368505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.372920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.372937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.372944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.376796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.376814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.376820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.384911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.384929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.384936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.397973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.397990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.397997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.401920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.401938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.401944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.404558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.404578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.404585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.410574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.410592] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.410599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.416690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.416707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.416714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.419926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.419943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.419949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.424081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.424098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.424105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.429704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.429722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.429728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.435550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.435568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.435574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.441553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.441571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.441577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.445819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.445838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.445844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.454560] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.454578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.454585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.458268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.458286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.458292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.462705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.462723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.462729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.468685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.468702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.468708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.471849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.471867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.471873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.475500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.475517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.475524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.482235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.482253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.482260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.485152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.485170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.485176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.488027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.059 [2024-10-13 17:43:11.488045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.059 [2024-10-13 17:43:11.488055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.059 [2024-10-13 17:43:11.491670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.491687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.491694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.494043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.494060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.494073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.499572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.499589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.499596] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.501873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.501890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.501897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.509431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.509449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.509456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.514147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.514165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.514171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.519524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.519541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.519547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.524436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.524454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.524461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.529239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.529259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.529267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.534328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.534346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.534352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.541670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.541688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.541694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.549743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.549760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.549767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.556996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.557013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.557019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.565600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.565617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.565624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.060 [2024-10-13 17:43:11.577627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.060 [2024-10-13 17:43:11.577644] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.060 [2024-10-13 17:43:11.577650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.583367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.583385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.583391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.587312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.587330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.587336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.591490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.591508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.591515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.597905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.597923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.597929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.602037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.602054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.602061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.612780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.612798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.612804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.616972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.616989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.616996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.624982] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.625001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.625007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.631783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.631801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.631807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.639595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.639612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.639619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.322 [2024-10-13 17:43:11.650096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.322 [2024-10-13 17:43:11.650113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.322 [2024-10-13 17:43:11.650123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0
00:33:03.322 [2024-10-13 17:43:11.662462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:03.322 [2024-10-13 17:43:11.662479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.322 [2024-10-13 17:43:11.662485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern repeats for each affected READ on qid:1 of tqpair=(0x1c6ca50) — data digest error, command print, then COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — with varying timestamps (17:43:11.662462 through 17:43:12.266848), cid, lba, and sqhd values ...]
00:33:03.849 [2024-10-13 17:43:12.266825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:03.849 [2024-10-13 17:43:12.266842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.849 [2024-10-13 17:43:12.266848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.849 [2024-10-13 17:43:12.275990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50)
00:33:03.849 [2024-10-13 17:43:12.276008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.276014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.286482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.286499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.286506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.295533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.295550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.295556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.303826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.303843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.303850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.312574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.312591] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.312597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.320733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.320750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.320756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.324732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.324749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.324756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.328288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.328306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.328312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.332521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.332539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.332549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.341164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.341183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.341189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.349482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.349499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.349506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.353549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.353567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.353573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.360174] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.360191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.360198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.363512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.363530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.363536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.366774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.366792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.366798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.849 [2024-10-13 17:43:12.370189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:03.849 [2024-10-13 17:43:12.370206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.849 [2024-10-13 17:43:12.370212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.373556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.373574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.373580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.382374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.382398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.382404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.387379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.387397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.387403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.395555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.395573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.395580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.404120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.404138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.404144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.412165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.412182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.412189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.420095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.420113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.420119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.428168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.428186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.428192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.438012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.438030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.438036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.448059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.448082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.448088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.458139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.458156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.458163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.469160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.469178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:04.111 [2024-10-13 17:43:12.469184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.479896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.479914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.479921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.489480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.489497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.489503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.498470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.498488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.498494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.507497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.507514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.507521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.517251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.517269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.517275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.527568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.527585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.527592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.539198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.539216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.539225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:04.111 [2024-10-13 17:43:12.551239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.111 [2024-10-13 17:43:12.551258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.111 [2024-10-13 17:43:12.551265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:04.112 [2024-10-13 17:43:12.563379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c6ca50) 00:33:04.112 [2024-10-13 17:43:12.563398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.112 [2024-10-13 17:43:12.563405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:04.112 00:33:04.112 Latency(us) 00:33:04.112 [2024-10-13T15:43:12.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.112 [2024-10-13T15:43:12.636Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:04.112 nvme0n1 : 2.01 4199.25 524.91 0.00 0.00 3807.13 474.45 13052.59 00:33:04.112 [2024-10-13T15:43:12.636Z] =================================================================================================================== 00:33:04.112 [2024-10-13T15:43:12.636Z] Total : 4199.25 524.91 0.00 0.00 3807.13 474.45 13052.59 00:33:04.112 0 00:33:04.112 17:43:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:04.112 17:43:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:04.112 17:43:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:04.112 17:43:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:04.112 | .driver_specific 00:33:04.112 | .nvme_error 00:33:04.112 | .status_code 00:33:04.112 | .command_transient_transport_error' 00:33:04.373 17:43:12 -- 
host/digest.sh@71 -- # (( 271 > 0 )) 00:33:04.373 17:43:12 -- host/digest.sh@73 -- # killprocess 3406991 00:33:04.373 17:43:12 -- common/autotest_common.sh@926 -- # '[' -z 3406991 ']' 00:33:04.373 17:43:12 -- common/autotest_common.sh@930 -- # kill -0 3406991 00:33:04.373 17:43:12 -- common/autotest_common.sh@931 -- # uname 00:33:04.373 17:43:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:04.373 17:43:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3406991 00:33:04.373 17:43:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:04.373 17:43:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:04.373 17:43:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3406991' 00:33:04.373 killing process with pid 3406991 00:33:04.373 17:43:12 -- common/autotest_common.sh@945 -- # kill 3406991 00:33:04.373 Received shutdown signal, test time was about 2.000000 seconds 00:33:04.373 00:33:04.373 Latency(us) 00:33:04.373 [2024-10-13T15:43:12.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.373 [2024-10-13T15:43:12.897Z] =================================================================================================================== 00:33:04.373 [2024-10-13T15:43:12.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:04.373 17:43:12 -- common/autotest_common.sh@950 -- # wait 3406991 00:33:04.634 17:43:12 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:33:04.634 17:43:12 -- host/digest.sh@54 -- # local rw bs qd 00:33:04.634 17:43:12 -- host/digest.sh@56 -- # rw=randwrite 00:33:04.634 17:43:12 -- host/digest.sh@56 -- # bs=4096 00:33:04.634 17:43:12 -- host/digest.sh@56 -- # qd=128 00:33:04.634 17:43:12 -- host/digest.sh@58 -- # bperfpid=3407682 00:33:04.634 17:43:12 -- host/digest.sh@60 -- # waitforlisten 3407682 /var/tmp/bperf.sock 00:33:04.634 17:43:12 -- common/autotest_common.sh@819 -- # '[' -z 3407682 ']' 00:33:04.634 17:43:12 
-- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:04.634 17:43:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:04.634 17:43:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:04.634 17:43:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:04.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:04.634 17:43:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:04.634 17:43:12 -- common/autotest_common.sh@10 -- # set +x 00:33:04.634 [2024-10-13 17:43:12.976380] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:04.634 [2024-10-13 17:43:12.976434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407682 ] 00:33:04.634 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.634 [2024-10-13 17:43:13.053353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.634 [2024-10-13 17:43:13.079491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.574 17:43:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:05.574 17:43:13 -- common/autotest_common.sh@852 -- # return 0 00:33:05.574 17:43:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:05.574 17:43:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:05.574 17:43:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:05.574 17:43:13 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.574 17:43:13 -- common/autotest_common.sh@10 -- # set +x 00:33:05.574 17:43:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.574 17:43:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.574 17:43:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.834 nvme0n1 00:33:05.834 17:43:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:05.834 17:43:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.834 17:43:14 -- common/autotest_common.sh@10 -- # set +x 00:33:05.834 17:43:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.834 17:43:14 -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:05.834 17:43:14 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:05.834 Running I/O for 2 seconds... 
00:33:05.834 [2024-10-13 17:43:14.302895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f8618 00:33:05.834 [2024-10-13 17:43:14.303541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.834 [2024-10-13 17:43:14.303567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.834 [2024-10-13 17:43:14.316776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fb048 00:33:05.834 [2024-10-13 17:43:14.318208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.834 [2024-10-13 17:43:14.318227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:05.834 [2024-10-13 17:43:14.327021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e5220 00:33:05.834 [2024-10-13 17:43:14.327669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.834 [2024-10-13 17:43:14.327688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.834 [2024-10-13 17:43:14.338393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fda78 00:33:05.834 [2024-10-13 17:43:14.339459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.834 [2024-10-13 17:43:14.339475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.834 [2024-10-13 17:43:14.349742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f0bc0 00:33:05.834 [2024-10-13 17:43:14.350759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.834 [2024-10-13 17:43:14.350776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.361630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e6300 00:33:06.095 [2024-10-13 17:43:14.362378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.362394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.371906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f2d80 00:33:06.095 [2024-10-13 17:43:14.372351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.372366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.385524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f6890 00:33:06.095 [2024-10-13 17:43:14.386646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.386662] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.395501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fac10 00:33:06.095 [2024-10-13 17:43:14.396539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.396554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.406121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f6458 00:33:06.095 [2024-10-13 17:43:14.406583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.406598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.419690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f81e0 00:33:06.095 [2024-10-13 17:43:14.420833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.420849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.429573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f2948 00:33:06.095 [2024-10-13 17:43:14.430646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.430662] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.440552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ed4e8 00:33:06.095 [2024-10-13 17:43:14.441504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.441520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.451901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f46d0 00:33:06.095 [2024-10-13 17:43:14.452794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.452810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.464733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190eff18 00:33:06.095 [2024-10-13 17:43:14.465783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.465798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.474570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ed0b0 00:33:06.095 [2024-10-13 17:43:14.475512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:06.095 [2024-10-13 17:43:14.475528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.485908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ed920 00:33:06.095 [2024-10-13 17:43:14.486858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.486874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.497285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e2c28 00:33:06.095 [2024-10-13 17:43:14.498294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.498309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.508647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fc998 00:33:06.095 [2024-10-13 17:43:14.509678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.509694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.519993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f35f0 00:33:06.095 [2024-10-13 17:43:14.521057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7146 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.521077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.531361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f3a28 00:33:06.095 [2024-10-13 17:43:14.532412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.532427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.542662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ff3c8 00:33:06.095 [2024-10-13 17:43:14.543703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.543719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.553933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e9168 00:33:06.095 [2024-10-13 17:43:14.554949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.554964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:06.095 [2024-10-13 17:43:14.565488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190edd58 00:33:06.095 [2024-10-13 17:43:14.566385] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.095 [2024-10-13 17:43:14.566401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:06.096 [2024-10-13 17:43:14.578340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e5a90 00:33:06.096 [2024-10-13 17:43:14.579356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.096 [2024-10-13 17:43:14.579371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:06.096 [2024-10-13 17:43:14.589597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f4b08 00:33:06.096 [2024-10-13 17:43:14.590721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.096 [2024-10-13 17:43:14.590738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:06.096 [2024-10-13 17:43:14.599862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e7818 00:33:06.096 [2024-10-13 17:43:14.600643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.096 [2024-10-13 17:43:14.600658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:06.096 [2024-10-13 17:43:14.610932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f2510 00:33:06.096 [2024-10-13 17:43:14.611975] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.096 [2024-10-13 17:43:14.611990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.621807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f8a50 00:33:06.357 [2024-10-13 17:43:14.622278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.622294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.633917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e8d30 00:33:06.357 [2024-10-13 17:43:14.634936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.634952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.645300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e1f80 00:33:06.357 [2024-10-13 17:43:14.645790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.645805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.656681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with 
pdu=0x2000190fa3a0 00:33:06.357 [2024-10-13 17:43:14.657163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.657178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.669561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e1b48 00:33:06.357 [2024-10-13 17:43:14.670670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.670686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.680220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fc998 00:33:06.357 [2024-10-13 17:43:14.681158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.681175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.690263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f1430 00:33:06.357 [2024-10-13 17:43:14.691024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.691040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.702009] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e88f8 00:33:06.357 [2024-10-13 17:43:14.702981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.702996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.715244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fda78 00:33:06.357 [2024-10-13 17:43:14.716492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.716507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.725479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fac10 00:33:06.357 [2024-10-13 17:43:14.726254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.726272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.736823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190dece0 00:33:06.357 [2024-10-13 17:43:14.738134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.738150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 
17:43:14.748154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190de038 00:33:06.357 [2024-10-13 17:43:14.749462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.749478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.758832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190de470 00:33:06.357 [2024-10-13 17:43:14.759727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.759742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.769889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f0788 00:33:06.357 [2024-10-13 17:43:14.770884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.770900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.782171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fb8b8 00:33:06.357 [2024-10-13 17:43:14.783342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.783358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0069 p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.791881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ebb98 00:33:06.357 [2024-10-13 17:43:14.792211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.792226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.803878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f7100 00:33:06.357 [2024-10-13 17:43:14.804929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.804944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.815320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190de470 00:33:06.357 [2024-10-13 17:43:14.816145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.357 [2024-10-13 17:43:14.816161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:06.357 [2024-10-13 17:43:14.828029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190feb58 00:33:06.358 [2024-10-13 17:43:14.829482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.358 [2024-10-13 17:43:14.829498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:06.358 [2024-10-13 17:43:14.839300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f8e88 00:33:06.358 [2024-10-13 17:43:14.840737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.358 [2024-10-13 17:43:14.840753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:06.358 [2024-10-13 17:43:14.850566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ddc00 00:33:06.358 [2024-10-13 17:43:14.851989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.358 [2024-10-13 17:43:14.852004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:06.358 [2024-10-13 17:43:14.861818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ef6a8 00:33:06.358 [2024-10-13 17:43:14.863240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.358 [2024-10-13 17:43:14.863256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:06.358 [2024-10-13 17:43:14.873086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190df118 00:33:06.358 [2024-10-13 17:43:14.874448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.358 [2024-10-13 17:43:14.874464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:06.618 [2024-10-13 17:43:14.884356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190de038 00:33:06.618 [2024-10-13 17:43:14.885738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.618 [2024-10-13 17:43:14.885753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:06.618 [2024-10-13 17:43:14.895621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fe720 00:33:06.618 [2024-10-13 17:43:14.896997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.618 [2024-10-13 17:43:14.897012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:06.618 [2024-10-13 17:43:14.906907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190dfdc0 00:33:06.619 [2024-10-13 17:43:14.908274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:14.908290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:14.918186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f1ca0 00:33:06.619 [2024-10-13 17:43:14.919548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 
[2024-10-13 17:43:14.919563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:14.929514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fe720 00:33:06.619 [2024-10-13 17:43:14.930857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:14.930873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:14.940812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e0a68 00:33:06.619 [2024-10-13 17:43:14.942158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:14.942174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:14.952389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190eee38 00:33:06.619 [2024-10-13 17:43:14.953596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:14.953612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:14.965284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e95a0 00:33:06.619 [2024-10-13 17:43:14.966563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8373 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:14.966579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:14.976565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190eaab8 00:33:06.619 [2024-10-13 17:43:14.977996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:14.978012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:14.986029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190df550 00:33:06.619 [2024-10-13 17:43:14.986404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:14.986420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:14.998918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e4de8 00:33:06.619 [2024-10-13 17:43:15.000354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.000371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.009320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e5a90 00:33:06.619 [2024-10-13 17:43:15.010217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:77 nsid:1 lba:23917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.010232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.019486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f2d80 00:33:06.619 [2024-10-13 17:43:15.020274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.020292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.031270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e3d08 00:33:06.619 [2024-10-13 17:43:15.032201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.032217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.042599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e5220 00:33:06.619 [2024-10-13 17:43:15.043722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.043737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.054123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e84c0 00:33:06.619 [2024-10-13 17:43:15.054480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.054495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.067264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f81e0 00:33:06.619 [2024-10-13 17:43:15.068810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.068826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.076746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ee5c8 00:33:06.619 [2024-10-13 17:43:15.077272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.077287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.088028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fbcf0 00:33:06.619 [2024-10-13 17:43:15.088380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.088396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.099339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f6cc8 00:33:06.619 
[2024-10-13 17:43:15.099658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.099674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.110670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e1710 00:33:06.619 [2024-10-13 17:43:15.111047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.111066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.123367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e7818 00:33:06.619 [2024-10-13 17:43:15.124402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.124418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:06.619 [2024-10-13 17:43:15.133925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f8a50 00:33:06.619 [2024-10-13 17:43:15.134762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.619 [2024-10-13 17:43:15.134779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.880 [2024-10-13 17:43:15.144710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x245a4c0) with pdu=0x2000190eee38 00:33:06.880 [2024-10-13 17:43:15.145647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.880 [2024-10-13 17:43:15.145663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:06.880 [2024-10-13 17:43:15.156317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f9f68 00:33:06.880 [2024-10-13 17:43:15.157356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.157372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.169207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f6458 00:33:06.881 [2024-10-13 17:43:15.170375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.170391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.179873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e27f0 00:33:06.881 [2024-10-13 17:43:15.180861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.180877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.189950] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190de038 00:33:06.881 [2024-10-13 17:43:15.190758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.190774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.201106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ec408 00:33:06.881 [2024-10-13 17:43:15.202155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.202171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.213369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f2948 00:33:06.881 [2024-10-13 17:43:15.214443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.214459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.224753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e12d8 00:33:06.881 [2024-10-13 17:43:15.225959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.225975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:06.881 [2024-10-13 17:43:15.237411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f31b8 00:33:06.881 [2024-10-13 17:43:15.238672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.238688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.247691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e3060 00:33:06.881 [2024-10-13 17:43:15.248636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.248652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.259155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e3060 00:33:06.881 [2024-10-13 17:43:15.260114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.260131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.270514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f1868 00:33:06.881 [2024-10-13 17:43:15.271469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.271484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.281923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f1868 00:33:06.881 [2024-10-13 17:43:15.282880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.282896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.293296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fb048 00:33:06.881 [2024-10-13 17:43:15.294245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.294261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.304743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fb048 00:33:06.881 [2024-10-13 17:43:15.305706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.305721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.316137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e3498 00:33:06.881 [2024-10-13 17:43:15.317092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.317113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.327597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e3498 00:33:06.881 [2024-10-13 17:43:15.328547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.328564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.339166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f9f68 00:33:06.881 [2024-10-13 17:43:15.340110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.340127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.350594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190eaef0 00:33:06.881 [2024-10-13 17:43:15.351392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.351408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.361969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e3060 00:33:06.881 [2024-10-13 17:43:15.362748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.362763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.373149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ee190 00:33:06.881 [2024-10-13 17:43:15.374365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.374381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.383672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e01f8 00:33:06.881 [2024-10-13 17:43:15.384120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.384136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:06.881 [2024-10-13 17:43:15.394975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e6fa8 00:33:06.881 [2024-10-13 17:43:15.395536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.881 [2024-10-13 17:43:15.395551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:07.142 [2024-10-13 17:43:15.408549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190de470 00:33:07.142 [2024-10-13 17:43:15.409686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.142 
[2024-10-13 17:43:15.409702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:07.142 [2024-10-13 17:43:15.419057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e0630 00:33:07.142 [2024-10-13 17:43:15.419933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.142 [2024-10-13 17:43:15.419948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:07.142 [2024-10-13 17:43:15.429256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e3d08 00:33:07.142 [2024-10-13 17:43:15.430005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.142 [2024-10-13 17:43:15.430021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:07.142 [2024-10-13 17:43:15.442783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fc560 00:33:07.142 [2024-10-13 17:43:15.444221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.142 [2024-10-13 17:43:15.444236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:07.142 [2024-10-13 17:43:15.452554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e7c50 00:33:07.142 [2024-10-13 17:43:15.453623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13477 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.142 [2024-10-13 17:43:15.453638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:07.142 [2024-10-13 17:43:15.464024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f3a28 00:33:07.142 [2024-10-13 17:43:15.465159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.142 [2024-10-13 17:43:15.465175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:07.142 [2024-10-13 17:43:15.474799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ee190 00:33:07.142 [2024-10-13 17:43:15.475127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.475143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.486074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e6300 00:33:07.143 [2024-10-13 17:43:15.486414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.486430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.499088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fa3a0 00:33:07.143 [2024-10-13 17:43:15.500362] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.500378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.509463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e6738 00:33:07.143 [2024-10-13 17:43:15.510496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.510512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.519774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e23b8 00:33:07.143 [2024-10-13 17:43:15.520344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.520360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.530793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f1ca0 00:33:07.143 [2024-10-13 17:43:15.531534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.531550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.542883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e84c0 00:33:07.143 [2024-10-13 17:43:15.543914] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.543930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.555159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e9168 00:33:07.143 [2024-10-13 17:43:15.556197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.556213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.564974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e01f8 00:33:07.143 [2024-10-13 17:43:15.565138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.565154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.576479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190eb328 00:33:07.143 [2024-10-13 17:43:15.576793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.576809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.587782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with 
pdu=0x2000190fc560 00:33:07.143 [2024-10-13 17:43:15.588086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.588101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.599124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190de8a8 00:33:07.143 [2024-10-13 17:43:15.599429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.599445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.612401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ebb98 00:33:07.143 [2024-10-13 17:43:15.613042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.613067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.625388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190eee38 00:33:07.143 [2024-10-13 17:43:15.626865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.626881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.635632] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e99d8 00:33:07.143 [2024-10-13 17:43:15.636629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.636645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.646922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f7538 00:33:07.143 [2024-10-13 17:43:15.648455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.648471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:07.143 [2024-10-13 17:43:15.658236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e3498 00:33:07.143 [2024-10-13 17:43:15.659778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.143 [2024-10-13 17:43:15.659793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:07.403 [2024-10-13 17:43:15.669512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fbcf0 00:33:07.403 [2024-10-13 17:43:15.671023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.403 [2024-10-13 17:43:15.671039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:07.403 [2024-10-13 
17:43:15.680187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e3d08 00:33:07.403 [2024-10-13 17:43:15.681284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.403 [2024-10-13 17:43:15.681299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:07.403 [2024-10-13 17:43:15.689995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f6020 00:33:07.403 [2024-10-13 17:43:15.690279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.403 [2024-10-13 17:43:15.690295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:07.403 [2024-10-13 17:43:15.701541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ed4e8 00:33:07.403 [2024-10-13 17:43:15.701954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.403 [2024-10-13 17:43:15.701969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:07.403 [2024-10-13 17:43:15.712864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e23b8 00:33:07.403 [2024-10-13 17:43:15.713253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.403 [2024-10-13 17:43:15.713269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 
sqhd:004b p:0 m:0 dnr:0 00:33:07.403 [2024-10-13 17:43:15.724190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f81e0 00:33:07.403 [2024-10-13 17:43:15.724579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.403 [2024-10-13 17:43:15.724595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:07.403 [2024-10-13 17:43:15.735309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190df988 00:33:07.403 [2024-10-13 17:43:15.736162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.403 [2024-10-13 17:43:15.736177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:07.403 [2024-10-13 17:43:15.746585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190f8e88 00:33:07.403 [2024-10-13 17:43:15.747387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.403 [2024-10-13 17:43:15.747402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:07.403 [2024-10-13 17:43:15.758614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e2c28 00:33:07.403 [2024-10-13 17:43:15.759740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.403 [2024-10-13 17:43:15.759755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.771471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e49b0 00:33:07.404 [2024-10-13 17:43:15.772826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 [2024-10-13 17:43:15.772842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.781157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190eff18 00:33:07.404 [2024-10-13 17:43:15.781591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 [2024-10-13 17:43:15.781606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.792338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e5220 00:33:07.404 [2024-10-13 17:43:15.793211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 [2024-10-13 17:43:15.793226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.804308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e5220 00:33:07.404 [2024-10-13 17:43:15.805469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 [2024-10-13 17:43:15.805485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.816445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e84c0 00:33:07.404 [2024-10-13 17:43:15.817561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 [2024-10-13 17:43:15.817577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.826322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fc998 00:33:07.404 [2024-10-13 17:43:15.827041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 [2024-10-13 17:43:15.827057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.838308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e27f0 00:33:07.404 [2024-10-13 17:43:15.839459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 [2024-10-13 17:43:15.839475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.850494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ed4e8 00:33:07.404 [2024-10-13 17:43:15.851574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 
[2024-10-13 17:43:15.851589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.860346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190ea680 00:33:07.404 [2024-10-13 17:43:15.861032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 [2024-10-13 17:43:15.861047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.873859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fbcf0 00:33:07.404 [2024-10-13 17:43:15.874824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 [2024-10-13 17:43:15.874840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.883646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e6b70 00:33:07.404 [2024-10-13 17:43:15.884407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.404 [2024-10-13 17:43:15.884423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:07.404 [2024-10-13 17:43:15.896472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e12d8 00:33:07.404 [2024-10-13 17:43:15.898007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21564 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:33:07.404 [2024-10-13 17:43:15.898022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:07.404 [2024-10-13 17:43:15.907725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190e7818
00:33:07.404 [2024-10-13 17:43:15.909246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:07.404 [2024-10-13 17:43:15.909264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... ~30 further "Data digest error" / WRITE / "COMMAND TRANSIENT TRANSPORT ERROR" triplets (17:43:15.918976 through 17:43:16.291439, varying cid/lba) elided ...]
00:33:07.927 [2024-10-13 17:43:16.291439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a4c0) with pdu=0x2000190fda78
00:33:07.927 [2024-10-13 17:43:16.292070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:07.927 [2024-10-13 17:43:16.292086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:07.927
00:33:07.927 Latency(us)
00:33:07.927 [2024-10-13T15:43:16.451Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:33:07.927 [2024-10-13T15:43:16.451Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:07.927 	 nvme0n1             :      2.00  22433.78    87.63     0.00     0.00   5700.34  3959.47 14308.69
00:33:07.927 [2024-10-13T15:43:16.451Z] ===================================================================================================================
00:33:07.927 [2024-10-13T15:43:16.451Z] Total               :            22433.78    87.63     0.00     0.00   5700.34  3959.47 14308.69
00:33:07.927 0
00:33:07.927 17:43:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:07.927 17:43:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:07.927 17:43:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:07.927 | .driver_specific
00:33:07.927 | .nvme_error
00:33:07.927 | .status_code
00:33:07.927 | .command_transient_transport_error'
00:33:07.927 17:43:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:08.189 17:43:16 -- host/digest.sh@71 -- # (( 176 > 0 ))
00:33:08.189 17:43:16 -- host/digest.sh@73 -- # killprocess 3407682
00:33:08.189 17:43:16 -- common/autotest_common.sh@926 -- # '[' -z 3407682 ']'
00:33:08.189 17:43:16 -- common/autotest_common.sh@930 -- # kill -0 3407682
00:33:08.189 17:43:16 -- common/autotest_common.sh@931 -- # uname
00:33:08.189 17:43:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:33:08.189 17:43:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3407682
00:33:08.189 17:43:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:33:08.189 17:43:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
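The trace above shows `digest.sh` computing the transient error count by piping `bdev_get_iostat` output through a jq filter. A minimal Python sketch of the same field extraction; the sample JSON below is invented for illustration and only reproduces the fields the jq filter touches:

```python
import json

# Hypothetical bdev_get_iostat-style payload (fields trimmed to the
# ones the jq filter in the trace actually reads).
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 176
          }
        }
      }
    }
  ]
}
""")

# Equivalent of:
#   jq -r '.bdevs[0] | .driver_specific | .nvme_error
#          | .status_code | .command_transient_transport_error'
count = (sample["bdevs"][0]["driver_specific"]["nvme_error"]
         ["status_code"]["command_transient_transport_error"])
print(count)
```

The script then asserts the count is positive (`(( 176 > 0 ))` in the trace), confirming that the injected digest corruption actually surfaced as transient transport errors.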
17:43:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3407682'
killing process with pid 3407682
17:43:16 -- common/autotest_common.sh@945 -- # kill 3407682
Received shutdown signal, test time was about 2.000000 seconds
00:33:08.189
00:33:08.189 Latency(us)
00:33:08.189 [2024-10-13T15:43:16.713Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:33:08.189 [2024-10-13T15:43:16.713Z] ===================================================================================================================
00:33:08.189 [2024-10-13T15:43:16.713Z] Total               :                0.00     0.00     0.00     0.00      0.00     0.00     0.00
00:33:08.189 17:43:16 -- common/autotest_common.sh@950 -- # wait 3407682
00:33:08.189 17:43:16 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:33:08.189 17:43:16 -- host/digest.sh@54 -- # local rw bs qd
00:33:08.189 17:43:16 -- host/digest.sh@56 -- # rw=randwrite
00:33:08.189 17:43:16 -- host/digest.sh@56 -- # bs=131072
00:33:08.189 17:43:16 -- host/digest.sh@56 -- # qd=16
00:33:08.189 17:43:16 -- host/digest.sh@58 -- # bperfpid=3408376
00:33:08.189 17:43:16 -- host/digest.sh@60 -- # waitforlisten 3408376 /var/tmp/bperf.sock
00:33:08.189 17:43:16 -- common/autotest_common.sh@819 -- # '[' -z 3408376 ']'
00:33:08.189 17:43:16 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:08.189 17:43:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:08.189 17:43:16 -- common/autotest_common.sh@824 -- # local max_retries=100
00:33:08.189 17:43:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
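The `waitforlisten` helper in the trace blocks until the freshly launched bdevperf process is accepting RPCs on `/var/tmp/bperf.sock`. A sketch of that poll-until-connect pattern; `wait_for_listen` and the demo listener are illustrative names, not the autotest implementation:

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(sock_path, timeout=10.0, interval=0.1):
    """Retry connecting to a UNIX-domain socket until a listener appears
    (sketch of what the autotest waitforlisten helper does)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)   # succeeds once something is listening
            return True
        except OSError:            # not bound yet, or not listening yet
            time.sleep(interval)
        finally:
            s.close()
    return False

# Demo: stand up a listener shortly after polling begins.
path = os.path.join(tempfile.mkdtemp(), "bperf.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)

def serve():
    time.sleep(0.3)
    srv.bind(path)
    srv.listen(1)

threading.Thread(target=serve, daemon=True).start()
ok = wait_for_listen(path)
print(ok)
```

The real helper additionally checks that the PID is still alive between retries (the `kill -0` seen elsewhere in the trace), so a crashed bdevperf fails fast instead of burning the full timeout.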
00:33:08.189 17:43:16 -- common/autotest_common.sh@828 -- # xtrace_disable
00:33:08.189 17:43:16 -- common/autotest_common.sh@10 -- # set +x
00:33:08.189 [2024-10-13 17:43:16.712385] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:33:08.189 [2024-10-13 17:43:16.712477] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408376 ]
00:33:08.189 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:08.189 Zero copy mechanism will not be used.
00:33:08.450 EAL: No free 2048 kB hugepages reported on node 1
00:33:08.450 [2024-10-13 17:43:16.794977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:08.450 [2024-10-13 17:43:16.821472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:09.020 17:43:17 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:33:09.020 17:43:17 -- common/autotest_common.sh@852 -- # return 0
00:33:09.020 17:43:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:09.020 17:43:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:09.279 17:43:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:09.279 17:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:09.279 17:43:17 -- common/autotest_common.sh@10 -- # set +x
00:33:09.279 17:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:09.279 17:43:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:09.279 17:43:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
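The trace attaches the controller with `--ddgst` (per-PDU data digests) and then, via `accel_error_inject_error -o crc32c -t corrupt`, flips the CRC32C results so the target sees a mismatch and logs the "Data digest error" lines. A minimal sketch of that detection; NVMe/TCP actually uses CRC32C, while the stdlib `zlib.crc32` (a different polynomial) stands in here, and `digest` is an illustrative name:

```python
import zlib

def digest(payload: bytes) -> int:
    # Stand-in for the CRC32C data digest computed over a PDU's data.
    return zlib.crc32(payload) & 0xFFFFFFFF

pdu_payload = b"\x00" * 4096          # one 4 KiB write payload
good = digest(pdu_payload)

# Emulate '-t corrupt': the payload no longer matches the digest that
# accompanied it, as the accel error injector arranges for every
# N-th operation (-i 32 in the trace).
corrupted = bytearray(pdu_payload)
corrupted[0] ^= 0x01                  # single bit flip

print(digest(bytes(corrupted)) == good)  # False: digest mismatch detected
```

A CRC detects any single-bit flip, so the receiver reliably reports the mismatch and the command completes with a TRANSIENT TRANSPORT ERROR rather than silently accepting corrupted data.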
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:09.540 nvme0n1
00:33:09.540 17:43:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:09.540 17:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:09.540 17:43:17 -- common/autotest_common.sh@10 -- # set +x
00:33:09.540 17:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:09.540 17:43:17 -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:09.540 17:43:17 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:09.540 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:09.540 Zero copy mechanism will not be used.
00:33:09.540 Running I/O for 2 seconds...
00:33:09.540 [2024-10-13 17:43:17.971349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90
00:33:09.540 [2024-10-13 17:43:17.971642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:09.540 [2024-10-13 17:43:17.971669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... ~30 further "Data digest error" / WRITE / "COMMAND TRANSIENT TRANSPORT ERROR" triplets (17:43:17.979492 through 17:43:18.104118, cid:15, varying lba) elided ...]
00:33:09.802 [2024-10-13 17:43:18.107142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90
00:33:09.802 [2024-10-13 17:43:18.107206] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.107221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.110270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.110334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.110350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.113535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.113681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.113697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.116652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.116755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.116770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.119758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 
00:33:09.802 [2024-10-13 17:43:18.119835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.119850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.122863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.122962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.122977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.125932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.126003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.126018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.129008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.129085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.129100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.132106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.132193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.132208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.135247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.135358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.135373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.138444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.138587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.138606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.141554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.141652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.141668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 
17:43:18.144608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.144684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.144700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.147720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.147805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.147820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.150803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.150881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.150897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.153853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.153914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.153929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.158310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.158535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.158550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.164445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.164717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.164740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.171385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.171655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.171671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.177739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.177869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.177884] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.185440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.185636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.185651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.193742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.193834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.193849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.201533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.201658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.201673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.208762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.208963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 
17:43:18.208978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.216051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.216322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.216337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.802 [2024-10-13 17:43:18.222576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.802 [2024-10-13 17:43:18.222656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.802 [2024-10-13 17:43:18.222672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.228180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.228236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.228251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.232959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.233032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.233047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.237941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.237994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.238010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.244484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.244620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.244636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.251004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.251057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.251077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.256179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.256300] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.256315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.261147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.261228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.261244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.265905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.265985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.266001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.269209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.269280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.269295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.272662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 
17:43:18.272736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.272752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.279364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.279486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.279505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.285051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.285188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.285204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.292753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.293077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.293093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.296191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.296325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.296340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.302183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.302244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.302259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.307080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.307151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.307166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.310584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.310704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.310719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.314077] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.314175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.314190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.318167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.318243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.318259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.803 [2024-10-13 17:43:18.321859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:09.803 [2024-10-13 17:43:18.321956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.803 [2024-10-13 17:43:18.321972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.325113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.325212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.325227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:10.063 [2024-10-13 17:43:18.328920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.329076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.329090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.337979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.338244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.338260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.347953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.348236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.348252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.358289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.358659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.358675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.368871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.369192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.369208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.379241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.379476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.379492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.389628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.389819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.389837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.400174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.400467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.400484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.409955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.410057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.410078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.419974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.420129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.420144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.430330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.430580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.430595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.440209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.440441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:10.063 [2024-10-13 17:43:18.440456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.450876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.451201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.451217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.460969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.461257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.461280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.471779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.472013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.472028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.482077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.482326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.482341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.492248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.492518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.492534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.502220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.502498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.502514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.512291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.512552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.512568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.522251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.522476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.522492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.532815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.533130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.533146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.543045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.543397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.543413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.553223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.553413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.553428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.562692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 
00:33:10.063 [2024-10-13 17:43:18.562979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.562994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.572759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.573023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.573038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.063 [2024-10-13 17:43:18.583266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.063 [2024-10-13 17:43:18.583513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.063 [2024-10-13 17:43:18.583529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.593502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.593820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.593836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.602308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.602391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.602406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.612546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.612804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.612819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.622597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.622814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.622829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.631346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.631604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.631627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.636760] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.636812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.636827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.642133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.642232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.642249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.649733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.649983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.649999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.659411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.659668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.659684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:33:10.324 [2024-10-13 17:43:18.669328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.669577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.669594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.679086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.679154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.679169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.688053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.688264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.688279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.696623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.696906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.696921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.705249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.705467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.705482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.712869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.712944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.712960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.721073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.721315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.721330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.730513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.324 [2024-10-13 17:43:18.730807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.324 [2024-10-13 17:43:18.730822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.324 [2024-10-13 17:43:18.736941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.737198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.737214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.745957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.746216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.746231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.753300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.753584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.753600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.761132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.761185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:10.325 [2024-10-13 17:43:18.761200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.767052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.767344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.767360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.773590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.773756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.773771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.779036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.779116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.779131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.783568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.783648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.783663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.787799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.787865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.787880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.793513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.793594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.793610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.800735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.800982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.800997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.806672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.806895] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.806910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.813595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.813695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.813710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.821249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.821309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.821324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.829150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.829457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.829473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.836507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.836764] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.836783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.325 [2024-10-13 17:43:18.844238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.325 [2024-10-13 17:43:18.844475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-10-13 17:43:18.844490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.852765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.852980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.852995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.861602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.861842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.861857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.872369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with 
pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.872613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.872628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.881652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.881898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.881914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.891849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.892109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.892125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.902355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.902639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.902655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.913399] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.913590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.913605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.923951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.924164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.924183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.933816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.934075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.934090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.944479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.944704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.944719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 
17:43:18.954996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.955262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.955277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.965335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.965547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.965562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.976485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.976737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.976752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.986491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.586 [2024-10-13 17:43:18.986660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.586 [2024-10-13 17:43:18.986675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:33:10.586 [2024-10-13 17:43:18.997118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:18.997408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:18.997423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.007531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.007743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:19.007758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.017890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.018264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:19.018281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.028421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.028672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:19.028688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.038166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.038445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:19.038461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.047304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.047546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:19.047561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.057567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.057802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:19.057818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.068127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.068372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:19.068388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.078175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.078477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:19.078493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.088122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.088309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:19.088324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.098237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.098363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.587 [2024-10-13 17:43:19.098381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.587 [2024-10-13 17:43:19.108287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.587 [2024-10-13 17:43:19.108557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:10.587 [2024-10-13 17:43:19.108572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.118521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.118803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.118818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.129031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.129284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.129299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.139167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.139438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.139453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.149398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.149684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.149700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.159820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.160050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.160070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.170322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.170621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.170637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.180849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.181134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.181149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.191105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.191296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.191317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.201302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.201523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.201538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.211443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.211702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.211718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.221725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.221986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.222001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.231340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 
00:33:10.848 [2024-10-13 17:43:19.231625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.231640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.241464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.241669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.241684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.252139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.252258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.252273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.848 [2024-10-13 17:43:19.262609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.848 [2024-10-13 17:43:19.262886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.848 [2024-10-13 17:43:19.262902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.849 [2024-10-13 17:43:19.273007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.849 [2024-10-13 17:43:19.273278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.849 [2024-10-13 17:43:19.273292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.849 [2024-10-13 17:43:19.283403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.849 [2024-10-13 17:43:19.283633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.849 [2024-10-13 17:43:19.283649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.849 [2024-10-13 17:43:19.293907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.849 [2024-10-13 17:43:19.294183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.849 [2024-10-13 17:43:19.294199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.849 [2024-10-13 17:43:19.303931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.849 [2024-10-13 17:43:19.304184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.849 [2024-10-13 17:43:19.304200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.849 [2024-10-13 
17:43:19.313701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.849 [2024-10-13 17:43:19.314001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.849 [2024-10-13 17:43:19.314017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.849 [2024-10-13 17:43:19.323449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.849 [2024-10-13 17:43:19.323660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.849 [2024-10-13 17:43:19.323675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.849 [2024-10-13 17:43:19.333850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.849 [2024-10-13 17:43:19.334165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.849 [2024-10-13 17:43:19.334182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.849 [2024-10-13 17:43:19.343253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.849 [2024-10-13 17:43:19.343508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.849 [2024-10-13 17:43:19.343523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.849 [2024-10-13 17:43:19.353228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.849 [2024-10-13 17:43:19.353491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.849 [2024-10-13 17:43:19.353506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.849 [2024-10-13 17:43:19.363623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:10.849 [2024-10-13 17:43:19.363892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.849 [2024-10-13 17:43:19.363910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.374008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.110 [2024-10-13 17:43:19.374249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.110 [2024-10-13 17:43:19.374264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.384234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.110 [2024-10-13 17:43:19.384501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.110 [2024-10-13 17:43:19.384515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.394494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.110 [2024-10-13 17:43:19.394746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.110 [2024-10-13 17:43:19.394762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.405023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.110 [2024-10-13 17:43:19.405296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.110 [2024-10-13 17:43:19.405312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.415343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.110 [2024-10-13 17:43:19.415607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.110 [2024-10-13 17:43:19.415623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.425720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.110 [2024-10-13 17:43:19.425998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.110 [2024-10-13 17:43:19.426014] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.435776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.110 [2024-10-13 17:43:19.436028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.110 [2024-10-13 17:43:19.436043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.445942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.110 [2024-10-13 17:43:19.446246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.110 [2024-10-13 17:43:19.446262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.456131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.110 [2024-10-13 17:43:19.456362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.110 [2024-10-13 17:43:19.456378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.465788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.110 [2024-10-13 17:43:19.466042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:11.110 [2024-10-13 17:43:19.466058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.110 [2024-10-13 17:43:19.476006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.476284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.476300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.486272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.486409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.486424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.496128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.496367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.496381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.506276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.506527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.506542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.516739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.516998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.517013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.526956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.527203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.527219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.536709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.536984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.537000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.547044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.547265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.547280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.556808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.557021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.557036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.566691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.566936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.566951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.577081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.577365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.577381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.587268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 
00:33:11.111 [2024-10-13 17:43:19.587518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.587533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.597641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.597902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.597918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.606390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.606657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.606672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.616560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.616819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.616835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.624655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.624926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.624946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.111 [2024-10-13 17:43:19.632802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.111 [2024-10-13 17:43:19.632880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.111 [2024-10-13 17:43:19.632895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.641217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.641497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.641512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.648868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.649001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.649016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 
17:43:19.654998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.655287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.655302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.663517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.663748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.663763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.672430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.672700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.672715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.679880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.680132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.680147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.686807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.687075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.687090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.694610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.694959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.694975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.701927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.702011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.702026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.706517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.706823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.706838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.715079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.715270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.715285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.721398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.721463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.721478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.729186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.729459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.729475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.737486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.737554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.737569] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.744765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.745021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.745036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.753359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.753425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.753440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.760907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.760988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.761004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.768155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.768265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.768280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.775986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.776049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.776069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.782807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.782863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.782878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.789542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.789702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.789718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.796741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.796964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.796980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.804600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.804692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.804707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.812696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.812974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.812990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.822571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.822801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.822819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.832688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.373 [2024-10-13 17:43:19.832972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.373 [2024-10-13 17:43:19.832987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.373 [2024-10-13 17:43:19.843092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.374 [2024-10-13 17:43:19.843334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.374 [2024-10-13 17:43:19.843349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.374 [2024-10-13 17:43:19.852591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.374 [2024-10-13 17:43:19.852805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.374 [2024-10-13 17:43:19.852820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.374 [2024-10-13 17:43:19.862161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.374 [2024-10-13 17:43:19.862403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.374 [2024-10-13 17:43:19.862417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.374 [2024-10-13 17:43:19.872268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 
00:33:11.374 [2024-10-13 17:43:19.872472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.374 [2024-10-13 17:43:19.872487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.374 [2024-10-13 17:43:19.879636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.374 [2024-10-13 17:43:19.879906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.374 [2024-10-13 17:43:19.879921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.374 [2024-10-13 17:43:19.885900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.374 [2024-10-13 17:43:19.885989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.374 [2024-10-13 17:43:19.886004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.374 [2024-10-13 17:43:19.892608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.374 [2024-10-13 17:43:19.892707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.374 [2024-10-13 17:43:19.892722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.634 [2024-10-13 17:43:19.899222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.634 [2024-10-13 17:43:19.899451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.634 [2024-10-13 17:43:19.899466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.634 [2024-10-13 17:43:19.906890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.635 [2024-10-13 17:43:19.907121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.635 [2024-10-13 17:43:19.907136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.635 [2024-10-13 17:43:19.915483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.635 [2024-10-13 17:43:19.915729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.635 [2024-10-13 17:43:19.915750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.635 [2024-10-13 17:43:19.922133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.635 [2024-10-13 17:43:19.922193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.635 [2024-10-13 17:43:19.922208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.635 [2024-10-13 
17:43:19.928808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.635 [2024-10-13 17:43:19.929026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.635 [2024-10-13 17:43:19.929041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.635 [2024-10-13 17:43:19.934561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.635 [2024-10-13 17:43:19.934653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.635 [2024-10-13 17:43:19.934668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.635 [2024-10-13 17:43:19.938639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.635 [2024-10-13 17:43:19.938911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.635 [2024-10-13 17:43:19.938927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.635 [2024-10-13 17:43:19.946224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.635 [2024-10-13 17:43:19.946420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.635 [2024-10-13 17:43:19.946434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.635 [2024-10-13 17:43:19.952984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.635 [2024-10-13 17:43:19.953061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.635 [2024-10-13 17:43:19.953084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.635 [2024-10-13 17:43:19.956504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.635 [2024-10-13 17:43:19.956567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.635 [2024-10-13 17:43:19.956582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.635 [2024-10-13 17:43:19.962098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x245a9a0) with pdu=0x2000190fef90 00:33:11.635 [2024-10-13 17:43:19.962495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.635 [2024-10-13 17:43:19.962511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.635 00:33:11.635 Latency(us) 00:33:11.635 [2024-10-13T15:43:20.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.635 [2024-10-13T15:43:20.159Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:11.635 nvme0n1 : 2.00 3995.62 499.45 0.00 0.00 3997.27 1447.25 11250.35 00:33:11.635 [2024-10-13T15:43:20.159Z] 
=================================================================================================================== 00:33:11.635 [2024-10-13T15:43:20.159Z] Total : 3995.62 499.45 0.00 0.00 3997.27 1447.25 11250.35 00:33:11.635 0 00:33:11.635 17:43:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:11.635 17:43:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:11.635 17:43:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:11.635 | .driver_specific 00:33:11.635 | .nvme_error 00:33:11.635 | .status_code 00:33:11.635 | .command_transient_transport_error' 00:33:11.635 17:43:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:11.896 17:43:20 -- host/digest.sh@71 -- # (( 258 > 0 )) 00:33:11.896 17:43:20 -- host/digest.sh@73 -- # killprocess 3408376 00:33:11.896 17:43:20 -- common/autotest_common.sh@926 -- # '[' -z 3408376 ']' 00:33:11.896 17:43:20 -- common/autotest_common.sh@930 -- # kill -0 3408376 00:33:11.896 17:43:20 -- common/autotest_common.sh@931 -- # uname 00:33:11.896 17:43:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:11.896 17:43:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3408376 00:33:11.896 17:43:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:11.896 17:43:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:11.896 17:43:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3408376' 00:33:11.896 killing process with pid 3408376 00:33:11.896 17:43:20 -- common/autotest_common.sh@945 -- # kill 3408376 00:33:11.896 Received shutdown signal, test time was about 2.000000 seconds 00:33:11.896 00:33:11.896 Latency(us) 00:33:11.896 [2024-10-13T15:43:20.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.896 [2024-10-13T15:43:20.420Z] 
=================================================================================================================== 00:33:11.896 [2024-10-13T15:43:20.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:11.896 17:43:20 -- common/autotest_common.sh@950 -- # wait 3408376 00:33:11.896 17:43:20 -- host/digest.sh@115 -- # killprocess 3406060 00:33:11.896 17:43:20 -- common/autotest_common.sh@926 -- # '[' -z 3406060 ']' 00:33:11.896 17:43:20 -- common/autotest_common.sh@930 -- # kill -0 3406060 00:33:11.896 17:43:20 -- common/autotest_common.sh@931 -- # uname 00:33:11.896 17:43:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:11.896 17:43:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3406060 00:33:11.896 17:43:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:11.896 17:43:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:11.896 17:43:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3406060' 00:33:11.896 killing process with pid 3406060 00:33:11.896 17:43:20 -- common/autotest_common.sh@945 -- # kill 3406060 00:33:11.896 17:43:20 -- common/autotest_common.sh@950 -- # wait 3406060 00:33:12.157 00:33:12.157 real 0m16.059s 00:33:12.157 user 0m31.474s 00:33:12.157 sys 0m3.575s 00:33:12.157 17:43:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:12.157 17:43:20 -- common/autotest_common.sh@10 -- # set +x 00:33:12.157 ************************************ 00:33:12.157 END TEST nvmf_digest_error 00:33:12.157 ************************************ 00:33:12.157 17:43:20 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:33:12.157 17:43:20 -- host/digest.sh@139 -- # nvmftestfini 00:33:12.157 17:43:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:12.157 17:43:20 -- nvmf/common.sh@116 -- # sync 00:33:12.157 17:43:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:12.157 17:43:20 -- nvmf/common.sh@119 -- # set +e 00:33:12.157 17:43:20 -- nvmf/common.sh@120 -- # 
for i in {1..20} 00:33:12.157 17:43:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:12.157 rmmod nvme_tcp 00:33:12.157 rmmod nvme_fabrics 00:33:12.157 rmmod nvme_keyring 00:33:12.157 17:43:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:12.157 17:43:20 -- nvmf/common.sh@123 -- # set -e 00:33:12.157 17:43:20 -- nvmf/common.sh@124 -- # return 0 00:33:12.157 17:43:20 -- nvmf/common.sh@477 -- # '[' -n 3406060 ']' 00:33:12.157 17:43:20 -- nvmf/common.sh@478 -- # killprocess 3406060 00:33:12.157 17:43:20 -- common/autotest_common.sh@926 -- # '[' -z 3406060 ']' 00:33:12.157 17:43:20 -- common/autotest_common.sh@930 -- # kill -0 3406060 00:33:12.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3406060) - No such process 00:33:12.157 17:43:20 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3406060 is not found' 00:33:12.157 Process with pid 3406060 is not found 00:33:12.157 17:43:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:12.157 17:43:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:12.157 17:43:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:12.157 17:43:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:12.157 17:43:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:12.157 17:43:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.157 17:43:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:12.157 17:43:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.701 17:43:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:14.701 00:33:14.701 real 0m41.640s 00:33:14.701 user 1m5.113s 00:33:14.701 sys 0m12.776s 00:33:14.701 17:43:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:14.702 17:43:22 -- common/autotest_common.sh@10 -- # set +x 00:33:14.702 ************************************ 00:33:14.702 END TEST nvmf_digest 00:33:14.702 
************************************ 00:33:14.702 17:43:22 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:33:14.702 17:43:22 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:33:14.702 17:43:22 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:33:14.702 17:43:22 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:14.702 17:43:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:14.702 17:43:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:14.702 17:43:22 -- common/autotest_common.sh@10 -- # set +x 00:33:14.702 ************************************ 00:33:14.702 START TEST nvmf_bdevperf 00:33:14.702 ************************************ 00:33:14.702 17:43:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:14.702 * Looking for test storage... 00:33:14.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:14.702 17:43:22 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:14.702 17:43:22 -- nvmf/common.sh@7 -- # uname -s 00:33:14.702 17:43:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.702 17:43:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.702 17:43:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:14.702 17:43:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.702 17:43:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.702 17:43:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.702 17:43:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.702 17:43:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.702 17:43:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.702 17:43:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.702 17:43:22 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:14.702 17:43:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:14.702 17:43:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.702 17:43:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.702 17:43:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:14.702 17:43:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.702 17:43:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.702 17:43:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.702 17:43:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.702 17:43:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.702 17:43:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.702 17:43:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.702 17:43:22 -- paths/export.sh@5 -- # export PATH 00:33:14.702 17:43:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.702 17:43:22 -- nvmf/common.sh@46 -- # : 0 00:33:14.702 17:43:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:14.702 17:43:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:14.702 17:43:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:14.702 17:43:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.702 17:43:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.702 17:43:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:14.702 17:43:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:14.702 17:43:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:14.702 17:43:22 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:14.702 17:43:22 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:14.702 17:43:22 -- host/bdevperf.sh@24 -- # 
nvmftestinit 00:33:14.702 17:43:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:14.702 17:43:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:14.702 17:43:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:14.702 17:43:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:14.702 17:43:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:14.702 17:43:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.702 17:43:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:14.702 17:43:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.702 17:43:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:14.702 17:43:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:14.702 17:43:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:14.702 17:43:22 -- common/autotest_common.sh@10 -- # set +x 00:33:21.288 17:43:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:21.288 17:43:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:21.288 17:43:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:21.288 17:43:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:21.288 17:43:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:21.288 17:43:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:21.288 17:43:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:21.288 17:43:29 -- nvmf/common.sh@294 -- # net_devs=() 00:33:21.288 17:43:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:21.288 17:43:29 -- nvmf/common.sh@295 -- # e810=() 00:33:21.288 17:43:29 -- nvmf/common.sh@295 -- # local -ga e810 00:33:21.288 17:43:29 -- nvmf/common.sh@296 -- # x722=() 00:33:21.288 17:43:29 -- nvmf/common.sh@296 -- # local -ga x722 00:33:21.288 17:43:29 -- nvmf/common.sh@297 -- # mlx=() 00:33:21.288 17:43:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:21.288 17:43:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.288 17:43:29 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.288 17:43:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.288 17:43:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.288 17:43:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.288 17:43:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.288 17:43:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.288 17:43:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.288 17:43:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.288 17:43:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.288 17:43:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.288 17:43:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:21.288 17:43:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:21.288 17:43:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:21.288 17:43:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:21.288 17:43:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:21.288 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:21.288 17:43:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:21.288 17:43:29 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:21.288 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:21.288 17:43:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:21.288 17:43:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:21.288 17:43:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.288 17:43:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:21.288 17:43:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.288 17:43:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:21.288 Found net devices under 0000:31:00.0: cvl_0_0 00:33:21.288 17:43:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.288 17:43:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:21.288 17:43:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.288 17:43:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:21.288 17:43:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.288 17:43:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:21.288 Found net devices under 0000:31:00.1: cvl_0_1 00:33:21.288 17:43:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.288 17:43:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:21.288 17:43:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:21.288 17:43:29 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:33:21.288 17:43:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:21.288 17:43:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:21.288 17:43:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.288 17:43:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.288 17:43:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.288 17:43:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:21.288 17:43:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.288 17:43:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.288 17:43:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:21.288 17:43:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.288 17:43:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.288 17:43:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:21.288 17:43:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:21.288 17:43:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.549 17:43:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.549 17:43:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.549 17:43:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.549 17:43:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:21.549 17:43:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.817 17:43:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.817 17:43:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.817 17:43:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:21.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:21.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:33:21.817 00:33:21.817 --- 10.0.0.2 ping statistics --- 00:33:21.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.817 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:33:21.817 17:43:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:21.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:33:21.817 00:33:21.817 --- 10.0.0.1 ping statistics --- 00:33:21.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.817 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:33:21.817 17:43:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.817 17:43:30 -- nvmf/common.sh@410 -- # return 0 00:33:21.817 17:43:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:21.817 17:43:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.817 17:43:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:21.817 17:43:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:21.817 17:43:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.817 17:43:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:21.817 17:43:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:21.817 17:43:30 -- host/bdevperf.sh@25 -- # tgt_init 00:33:21.817 17:43:30 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:21.817 17:43:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:21.817 17:43:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:21.817 17:43:30 -- common/autotest_common.sh@10 -- # set +x 00:33:21.817 17:43:30 -- nvmf/common.sh@469 -- # nvmfpid=3413414 00:33:21.817 17:43:30 -- nvmf/common.sh@470 -- # waitforlisten 3413414 00:33:21.817 17:43:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0xE 00:33:21.817 17:43:30 -- common/autotest_common.sh@819 -- # '[' -z 3413414 ']' 00:33:21.817 17:43:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.817 17:43:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:21.817 17:43:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.817 17:43:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:21.817 17:43:30 -- common/autotest_common.sh@10 -- # set +x 00:33:21.817 [2024-10-13 17:43:30.205963] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:21.817 [2024-10-13 17:43:30.206027] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.817 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.817 [2024-10-13 17:43:30.274567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:21.817 [2024-10-13 17:43:30.317411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:21.817 [2024-10-13 17:43:30.317533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:21.817 [2024-10-13 17:43:30.317541] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:21.817 [2024-10-13 17:43:30.317547] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:21.817 [2024-10-13 17:43:30.317695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:21.817 [2024-10-13 17:43:30.317874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.817 [2024-10-13 17:43:30.317876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:22.869 17:43:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:22.869 17:43:31 -- common/autotest_common.sh@852 -- # return 0 00:33:22.869 17:43:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:22.869 17:43:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:22.869 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:33:22.869 17:43:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:22.869 17:43:31 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:22.869 17:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.869 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:33:22.869 [2024-10-13 17:43:31.095839] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.869 17:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.869 17:43:31 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:22.869 17:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.869 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:33:22.869 Malloc0 00:33:22.869 17:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.869 17:43:31 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:22.869 17:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.869 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:33:22.869 17:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.869 17:43:31 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:22.869 17:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.869 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:33:22.869 17:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.869 17:43:31 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:22.869 17:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.869 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:33:22.869 [2024-10-13 17:43:31.159471] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:22.869 17:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.869 17:43:31 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:22.869 17:43:31 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:22.869 17:43:31 -- nvmf/common.sh@520 -- # config=() 00:33:22.869 17:43:31 -- nvmf/common.sh@520 -- # local subsystem config 00:33:22.869 17:43:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:22.869 17:43:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:22.869 { 00:33:22.869 "params": { 00:33:22.869 "name": "Nvme$subsystem", 00:33:22.869 "trtype": "$TEST_TRANSPORT", 00:33:22.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:22.869 "adrfam": "ipv4", 00:33:22.869 "trsvcid": "$NVMF_PORT", 00:33:22.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:22.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:22.869 "hdgst": ${hdgst:-false}, 00:33:22.869 "ddgst": ${ddgst:-false} 00:33:22.869 }, 00:33:22.869 "method": "bdev_nvme_attach_controller" 00:33:22.869 } 00:33:22.869 EOF 00:33:22.869 )") 00:33:22.869 17:43:31 -- nvmf/common.sh@542 -- # cat 00:33:22.869 17:43:31 -- nvmf/common.sh@544 -- # jq . 
00:33:22.869 17:43:31 -- nvmf/common.sh@545 -- # IFS=, 00:33:22.869 17:43:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:22.869 "params": { 00:33:22.869 "name": "Nvme1", 00:33:22.869 "trtype": "tcp", 00:33:22.869 "traddr": "10.0.0.2", 00:33:22.869 "adrfam": "ipv4", 00:33:22.869 "trsvcid": "4420", 00:33:22.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:22.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:22.869 "hdgst": false, 00:33:22.869 "ddgst": false 00:33:22.869 }, 00:33:22.869 "method": "bdev_nvme_attach_controller" 00:33:22.869 }' 00:33:22.869 [2024-10-13 17:43:31.221762] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:22.869 [2024-10-13 17:43:31.221818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413506 ] 00:33:22.869 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.869 [2024-10-13 17:43:31.283208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.869 [2024-10-13 17:43:31.312182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.131 Running I/O for 1 seconds... 
00:33:24.073 00:33:24.073 Latency(us) 00:33:24.073 [2024-10-13T15:43:32.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.073 [2024-10-13T15:43:32.597Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:24.073 Verification LBA range: start 0x0 length 0x4000 00:33:24.073 Nvme1n1 : 1.01 13961.82 54.54 0.00 0.00 9124.69 1331.20 15073.28 00:33:24.073 [2024-10-13T15:43:32.597Z] =================================================================================================================== 00:33:24.073 [2024-10-13T15:43:32.597Z] Total : 13961.82 54.54 0.00 0.00 9124.69 1331.20 15073.28 00:33:24.332 17:43:32 -- host/bdevperf.sh@30 -- # bdevperfpid=3413845 00:33:24.332 17:43:32 -- host/bdevperf.sh@32 -- # sleep 3 00:33:24.332 17:43:32 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:24.332 17:43:32 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:24.332 17:43:32 -- nvmf/common.sh@520 -- # config=() 00:33:24.332 17:43:32 -- nvmf/common.sh@520 -- # local subsystem config 00:33:24.332 17:43:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:24.332 17:43:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:24.332 { 00:33:24.332 "params": { 00:33:24.332 "name": "Nvme$subsystem", 00:33:24.332 "trtype": "$TEST_TRANSPORT", 00:33:24.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.332 "adrfam": "ipv4", 00:33:24.332 "trsvcid": "$NVMF_PORT", 00:33:24.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.332 "hdgst": ${hdgst:-false}, 00:33:24.332 "ddgst": ${ddgst:-false} 00:33:24.332 }, 00:33:24.332 "method": "bdev_nvme_attach_controller" 00:33:24.332 } 00:33:24.332 EOF 00:33:24.332 )") 00:33:24.332 17:43:32 -- nvmf/common.sh@542 -- # cat 00:33:24.332 17:43:32 -- nvmf/common.sh@544 -- # jq . 
00:33:24.332 17:43:32 -- nvmf/common.sh@545 -- # IFS=, 00:33:24.332 17:43:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:24.332 "params": { 00:33:24.332 "name": "Nvme1", 00:33:24.332 "trtype": "tcp", 00:33:24.332 "traddr": "10.0.0.2", 00:33:24.332 "adrfam": "ipv4", 00:33:24.332 "trsvcid": "4420", 00:33:24.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:24.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:24.332 "hdgst": false, 00:33:24.333 "ddgst": false 00:33:24.333 }, 00:33:24.333 "method": "bdev_nvme_attach_controller" 00:33:24.333 }' 00:33:24.333 [2024-10-13 17:43:32.665604] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:24.333 [2024-10-13 17:43:32.665659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413845 ] 00:33:24.333 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.333 [2024-10-13 17:43:32.727115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.333 [2024-10-13 17:43:32.754363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.592 Running I/O for 15 seconds... 
00:33:27.137 17:43:35 -- host/bdevperf.sh@33 -- # kill -9 3413414 00:33:27.137 17:43:35 -- host/bdevperf.sh@35 -- # sleep 3 00:33:27.137 [2024-10-13 17:43:35.634249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:27.137 [2024-10-13 17:43:35.634511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.137 [2024-10-13 17:43:35.634768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.137 [2024-10-13 17:43:35.634779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:27.138 [2024-10-13 17:43:35.634839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.634991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.634998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 
lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 
[2024-10-13 17:43:35.635131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 
[2024-10-13 17:43:35.635423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.138 [2024-10-13 17:43:35.635430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.138 [2024-10-13 17:43:35.635464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.138 [2024-10-13 17:43:35.635474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 
nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:27.139 [2024-10-13 17:43:35.635714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 
lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.635961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.635988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.635995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 
[2024-10-13 17:43:35.636005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.636015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.636025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.139 [2024-10-13 17:43:35.636033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.636043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.636050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.636060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.636071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.636082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.636090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.636100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.636107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.636117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.636124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.636133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.636141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.636151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.636158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.139 [2024-10-13 17:43:35.636167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.139 [2024-10-13 17:43:35.636175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.140 [2024-10-13 17:43:35.636193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.140 [2024-10-13 17:43:35.636210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.140 [2024-10-13 17:43:35.636278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 
[2024-10-13 17:43:35.636305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.140 [2024-10-13 17:43:35.636447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.140 [2024-10-13 17:43:35.636464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.140 [2024-10-13 17:43:35.636516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.140 [2024-10-13 17:43:35.636533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb280 is same with the state(5) to be set 00:33:27.140 [2024-10-13 17:43:35.636550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.140 [2024-10-13 17:43:35.636558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.140 [2024-10-13 17:43:35.636568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117776 len:8 PRP1 0x0 PRP2 0x0 00:33:27.140 [2024-10-13 17:43:35.636577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.140 [2024-10-13 17:43:35.636615] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dbb280 was disconnected and freed. reset controller. 
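The long dump above is hard to read as a wall of entries; a small parser can condense it into counts per opcode and the LBA span touched. This is a minimal sketch, not part of the test itself: the regex field layout is inferred from the `nvme_io_qpair_print_command` notices shown (e.g. `*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116744 len:8`), and `summarize_aborts` is a hypothetical helper, not SPDK API.

```python
import re
from collections import Counter

# Field layout assumed from the notices in the dump above, e.g.
# "*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116744 len:8"
CMD_RE = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def summarize_aborts(log_text):
    """Count aborted READ/WRITE commands and report the LBA span they cover."""
    ops = Counter()
    lbas = []
    for op, _sqid, _cid, _nsid, lba, _length in CMD_RE.findall(log_text):
        ops[op] += 1
        lbas.append(int(lba))
    # Completion notices ("ABORTED - SQ DELETION ...") do not match CMD_RE,
    # so only the command prints are counted.
    return dict(ops), ((min(lbas), max(lbas)) if lbas else None)
```

Run against the captured log, this reduces hundreds of abort notices to a two-line summary, which is usually all that matters when triaging a queue-deletion storm like this one.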
00:33:27.140 [2024-10-13 17:43:35.639061] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.140 [2024-10-13 17:43:35.639115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.140 [2024-10-13 17:43:35.639847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.140 [2024-10-13 17:43:35.639935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.140 [2024-10-13 17:43:35.639952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.140 [2024-10-13 17:43:35.639960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.140 [2024-10-13 17:43:35.640147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.140 [2024-10-13 17:43:35.640292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.140 [2024-10-13 17:43:35.640302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.140 [2024-10-13 17:43:35.640310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.140 [2024-10-13 17:43:35.642511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.140 [2024-10-13 17:43:35.651770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.140 [2024-10-13 17:43:35.652385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.140 [2024-10-13 17:43:35.652769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.140 [2024-10-13 17:43:35.652784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.140 [2024-10-13 17:43:35.652795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.140 [2024-10-13 17:43:35.652922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.140 [2024-10-13 17:43:35.653049] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.140 [2024-10-13 17:43:35.653058] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.140 [2024-10-13 17:43:35.653081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.140 [2024-10-13 17:43:35.655338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.402 [2024-10-13 17:43:35.664023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.402 [2024-10-13 17:43:35.664619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.402 [2024-10-13 17:43:35.664956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.402 [2024-10-13 17:43:35.664971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.402 [2024-10-13 17:43:35.664980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.402 [2024-10-13 17:43:35.665096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.402 [2024-10-13 17:43:35.665242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.402 [2024-10-13 17:43:35.665251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.402 [2024-10-13 17:43:35.665259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.402 [2024-10-13 17:43:35.667518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.402 [2024-10-13 17:43:35.676596] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.402 [2024-10-13 17:43:35.677158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.402 [2024-10-13 17:43:35.677545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.402 [2024-10-13 17:43:35.677559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.402 [2024-10-13 17:43:35.677569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.402 [2024-10-13 17:43:35.677713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.402 [2024-10-13 17:43:35.677841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.402 [2024-10-13 17:43:35.677849] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.402 [2024-10-13 17:43:35.677858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.402 [2024-10-13 17:43:35.680162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.402 [2024-10-13 17:43:35.689011] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.402 [2024-10-13 17:43:35.689480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.402 [2024-10-13 17:43:35.689684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.402 [2024-10-13 17:43:35.689695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.402 [2024-10-13 17:43:35.689703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.402 [2024-10-13 17:43:35.689847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.402 [2024-10-13 17:43:35.690008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.402 [2024-10-13 17:43:35.690017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.403 [2024-10-13 17:43:35.690024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.403 [2024-10-13 17:43:35.692211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.403 [2024-10-13 17:43:35.701553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.403 [2024-10-13 17:43:35.702060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.702465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.702479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.403 [2024-10-13 17:43:35.702489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.403 [2024-10-13 17:43:35.702651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.403 [2024-10-13 17:43:35.702797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.403 [2024-10-13 17:43:35.702806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.403 [2024-10-13 17:43:35.702814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.403 [2024-10-13 17:43:35.704985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.403 [2024-10-13 17:43:35.713864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.403 [2024-10-13 17:43:35.714401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.714771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.714786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.403 [2024-10-13 17:43:35.714795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.403 [2024-10-13 17:43:35.714994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.403 [2024-10-13 17:43:35.715223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.403 [2024-10-13 17:43:35.715235] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.403 [2024-10-13 17:43:35.715243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.403 [2024-10-13 17:43:35.717481] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.403 [2024-10-13 17:43:35.726470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.403 [2024-10-13 17:43:35.727074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.727416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.727430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.403 [2024-10-13 17:43:35.727440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.403 [2024-10-13 17:43:35.727638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.403 [2024-10-13 17:43:35.727748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.403 [2024-10-13 17:43:35.727757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.403 [2024-10-13 17:43:35.727765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.403 [2024-10-13 17:43:35.730084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.403 [2024-10-13 17:43:35.739016] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.403 [2024-10-13 17:43:35.739519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.739849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.739863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.403 [2024-10-13 17:43:35.739873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.403 [2024-10-13 17:43:35.740017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.403 [2024-10-13 17:43:35.740155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.403 [2024-10-13 17:43:35.740166] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.403 [2024-10-13 17:43:35.740174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.403 [2024-10-13 17:43:35.742485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.403 [2024-10-13 17:43:35.751635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.403 [2024-10-13 17:43:35.752250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.752616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.752631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.403 [2024-10-13 17:43:35.752640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.403 [2024-10-13 17:43:35.752838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.403 [2024-10-13 17:43:35.753003] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.403 [2024-10-13 17:43:35.753013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.403 [2024-10-13 17:43:35.753021] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.403 [2024-10-13 17:43:35.755180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.403 [2024-10-13 17:43:35.764130] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.403 [2024-10-13 17:43:35.764702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.765077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.765093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.403 [2024-10-13 17:43:35.765103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.403 [2024-10-13 17:43:35.765301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.403 [2024-10-13 17:43:35.765411] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.403 [2024-10-13 17:43:35.765421] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.403 [2024-10-13 17:43:35.765428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.403 [2024-10-13 17:43:35.767779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.403 [2024-10-13 17:43:35.776610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.403 [2024-10-13 17:43:35.777073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.777365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.777377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.403 [2024-10-13 17:43:35.777385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.403 [2024-10-13 17:43:35.777528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.403 [2024-10-13 17:43:35.777653] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.403 [2024-10-13 17:43:35.777664] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.403 [2024-10-13 17:43:35.777671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.403 [2024-10-13 17:43:35.779944] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.403 [2024-10-13 17:43:35.789075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.403 [2024-10-13 17:43:35.789645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.790022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.790037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.403 [2024-10-13 17:43:35.790047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.403 [2024-10-13 17:43:35.790236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.403 [2024-10-13 17:43:35.790439] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.403 [2024-10-13 17:43:35.790450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.403 [2024-10-13 17:43:35.790457] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.403 [2024-10-13 17:43:35.792644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.403 [2024-10-13 17:43:35.801701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.403 [2024-10-13 17:43:35.802210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.802525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.802536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.403 [2024-10-13 17:43:35.802544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.403 [2024-10-13 17:43:35.802651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.403 [2024-10-13 17:43:35.802793] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.403 [2024-10-13 17:43:35.802802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.403 [2024-10-13 17:43:35.802809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.403 [2024-10-13 17:43:35.804993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.403 [2024-10-13 17:43:35.814231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.403 [2024-10-13 17:43:35.814804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.815167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.403 [2024-10-13 17:43:35.815183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.403 [2024-10-13 17:43:35.815197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.403 [2024-10-13 17:43:35.815340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.403 [2024-10-13 17:43:35.815487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.404 [2024-10-13 17:43:35.815496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.404 [2024-10-13 17:43:35.815503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.404 [2024-10-13 17:43:35.817783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.404 [2024-10-13 17:43:35.826698] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.404 [2024-10-13 17:43:35.827166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.827399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.827410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.404 [2024-10-13 17:43:35.827418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.404 [2024-10-13 17:43:35.827561] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.404 [2024-10-13 17:43:35.827687] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.404 [2024-10-13 17:43:35.827696] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.404 [2024-10-13 17:43:35.827703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.404 [2024-10-13 17:43:35.830107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.404 [2024-10-13 17:43:35.839066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.404 [2024-10-13 17:43:35.839547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.839928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.839944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.404 [2024-10-13 17:43:35.839954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.404 [2024-10-13 17:43:35.840107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.404 [2024-10-13 17:43:35.840273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.404 [2024-10-13 17:43:35.840282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.404 [2024-10-13 17:43:35.840289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.404 [2024-10-13 17:43:35.842352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.404 [2024-10-13 17:43:35.851550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.404 [2024-10-13 17:43:35.852168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.852501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.852515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.404 [2024-10-13 17:43:35.852529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.404 [2024-10-13 17:43:35.852710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.404 [2024-10-13 17:43:35.852874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.404 [2024-10-13 17:43:35.852883] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.404 [2024-10-13 17:43:35.852891] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.404 [2024-10-13 17:43:35.855194] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.404 [2024-10-13 17:43:35.863929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.404 [2024-10-13 17:43:35.864498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.864833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.864848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.404 [2024-10-13 17:43:35.864858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.404 [2024-10-13 17:43:35.865019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.404 [2024-10-13 17:43:35.865156] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.404 [2024-10-13 17:43:35.865167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.404 [2024-10-13 17:43:35.865176] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.404 [2024-10-13 17:43:35.867399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.404 [2024-10-13 17:43:35.876385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.404 [2024-10-13 17:43:35.876920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.877261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.877278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.404 [2024-10-13 17:43:35.877288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.404 [2024-10-13 17:43:35.877450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.404 [2024-10-13 17:43:35.877578] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.404 [2024-10-13 17:43:35.877587] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.404 [2024-10-13 17:43:35.877596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.404 [2024-10-13 17:43:35.879728] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.404 [2024-10-13 17:43:35.888865] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.404 [2024-10-13 17:43:35.889475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.889728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.889742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.404 [2024-10-13 17:43:35.889752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.404 [2024-10-13 17:43:35.889955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.404 [2024-10-13 17:43:35.890152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.404 [2024-10-13 17:43:35.890164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.404 [2024-10-13 17:43:35.890172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.404 [2024-10-13 17:43:35.892448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.404 [2024-10-13 17:43:35.901179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.404 [2024-10-13 17:43:35.901716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.902048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.902072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.404 [2024-10-13 17:43:35.902083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.404 [2024-10-13 17:43:35.902226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.404 [2024-10-13 17:43:35.902391] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.404 [2024-10-13 17:43:35.902400] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.404 [2024-10-13 17:43:35.902408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.404 [2024-10-13 17:43:35.904708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.404 [2024-10-13 17:43:35.913700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.404 [2024-10-13 17:43:35.913998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.914343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.404 [2024-10-13 17:43:35.914355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.404 [2024-10-13 17:43:35.914367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.404 [2024-10-13 17:43:35.914474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.404 [2024-10-13 17:43:35.914599] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.404 [2024-10-13 17:43:35.914608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.404 [2024-10-13 17:43:35.914615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.404 [2024-10-13 17:43:35.916873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.666 [2024-10-13 17:43:35.926304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.666 [2024-10-13 17:43:35.926792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.666 [2024-10-13 17:43:35.927089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.666 [2024-10-13 17:43:35.927100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.666 [2024-10-13 17:43:35.927117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.666 [2024-10-13 17:43:35.927259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.666 [2024-10-13 17:43:35.927407] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.666 [2024-10-13 17:43:35.927416] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.666 [2024-10-13 17:43:35.927423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.666 [2024-10-13 17:43:35.929605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.666 [2024-10-13 17:43:35.938802] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.666 [2024-10-13 17:43:35.939324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.666 [2024-10-13 17:43:35.939544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.666 [2024-10-13 17:43:35.939560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.666 [2024-10-13 17:43:35.939569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.666 [2024-10-13 17:43:35.939768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.666 [2024-10-13 17:43:35.939898] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.666 [2024-10-13 17:43:35.939907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.666 [2024-10-13 17:43:35.939915] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.666 [2024-10-13 17:43:35.942127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.666 [2024-10-13 17:43:35.951137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.666 [2024-10-13 17:43:35.951668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.666 [2024-10-13 17:43:35.952044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.666 [2024-10-13 17:43:35.952059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.666 [2024-10-13 17:43:35.952078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.666 [2024-10-13 17:43:35.952258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.666 [2024-10-13 17:43:35.952404] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.666 [2024-10-13 17:43:35.952413] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.666 [2024-10-13 17:43:35.952421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.667 [2024-10-13 17:43:35.954610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.667 [2024-10-13 17:43:35.963642] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.667 [2024-10-13 17:43:35.964241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:35.964619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:35.964634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.667 [2024-10-13 17:43:35.964643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.667 [2024-10-13 17:43:35.964787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.667 [2024-10-13 17:43:35.964897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.667 [2024-10-13 17:43:35.964915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.667 [2024-10-13 17:43:35.964923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.667 [2024-10-13 17:43:35.967172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.667 [2024-10-13 17:43:35.975912] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.667 [2024-10-13 17:43:35.976468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:35.976801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:35.976816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.667 [2024-10-13 17:43:35.976825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.667 [2024-10-13 17:43:35.976987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.667 [2024-10-13 17:43:35.977178] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.667 [2024-10-13 17:43:35.977189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.667 [2024-10-13 17:43:35.977197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.667 [2024-10-13 17:43:35.979603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.667 [2024-10-13 17:43:35.988476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.667 [2024-10-13 17:43:35.988922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:35.989229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:35.989241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.667 [2024-10-13 17:43:35.989249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.667 [2024-10-13 17:43:35.989391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.667 [2024-10-13 17:43:35.989535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.667 [2024-10-13 17:43:35.989544] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.667 [2024-10-13 17:43:35.989551] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.667 [2024-10-13 17:43:35.991696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.667 [2024-10-13 17:43:36.001116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.667 [2024-10-13 17:43:36.001567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.001909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.001919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.667 [2024-10-13 17:43:36.001927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.667 [2024-10-13 17:43:36.002114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.667 [2024-10-13 17:43:36.002258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.667 [2024-10-13 17:43:36.002267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.667 [2024-10-13 17:43:36.002280] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.667 [2024-10-13 17:43:36.004570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.667 [2024-10-13 17:43:36.013583] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.667 [2024-10-13 17:43:36.014116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.014491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.014506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.667 [2024-10-13 17:43:36.014516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.667 [2024-10-13 17:43:36.014714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.667 [2024-10-13 17:43:36.014861] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.667 [2024-10-13 17:43:36.014870] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.667 [2024-10-13 17:43:36.014878] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.667 [2024-10-13 17:43:36.017034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.667 [2024-10-13 17:43:36.026010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.667 [2024-10-13 17:43:36.026622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.027004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.027019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.667 [2024-10-13 17:43:36.027029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.667 [2024-10-13 17:43:36.027183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.667 [2024-10-13 17:43:36.027330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.667 [2024-10-13 17:43:36.027340] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.667 [2024-10-13 17:43:36.027347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.667 [2024-10-13 17:43:36.029514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.667 [2024-10-13 17:43:36.038568] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.667 [2024-10-13 17:43:36.039141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.039516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.039530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.667 [2024-10-13 17:43:36.039540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.667 [2024-10-13 17:43:36.039665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.667 [2024-10-13 17:43:36.039793] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.667 [2024-10-13 17:43:36.039802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.667 [2024-10-13 17:43:36.039810] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.667 [2024-10-13 17:43:36.041931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.667 [2024-10-13 17:43:36.050987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.667 [2024-10-13 17:43:36.051530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.051858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.051874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.667 [2024-10-13 17:43:36.051884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.667 [2024-10-13 17:43:36.052027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.667 [2024-10-13 17:43:36.052167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.667 [2024-10-13 17:43:36.052177] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.667 [2024-10-13 17:43:36.052185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.667 [2024-10-13 17:43:36.054407] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.667 [2024-10-13 17:43:36.063436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.667 [2024-10-13 17:43:36.063993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.064348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.064364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.667 [2024-10-13 17:43:36.064374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.667 [2024-10-13 17:43:36.064554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.667 [2024-10-13 17:43:36.064719] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.667 [2024-10-13 17:43:36.064728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.667 [2024-10-13 17:43:36.064735] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.667 [2024-10-13 17:43:36.066979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.667 [2024-10-13 17:43:36.075685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.667 [2024-10-13 17:43:36.076212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.076544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.667 [2024-10-13 17:43:36.076559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.667 [2024-10-13 17:43:36.076568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.667 [2024-10-13 17:43:36.076694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.667 [2024-10-13 17:43:36.076840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.668 [2024-10-13 17:43:36.076850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.668 [2024-10-13 17:43:36.076857] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.668 [2024-10-13 17:43:36.079031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.668 [2024-10-13 17:43:36.088372] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.668 [2024-10-13 17:43:36.088933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.089270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.089286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.668 [2024-10-13 17:43:36.089295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.668 [2024-10-13 17:43:36.089475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.668 [2024-10-13 17:43:36.089585] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.668 [2024-10-13 17:43:36.089594] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.668 [2024-10-13 17:43:36.089602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.668 [2024-10-13 17:43:36.091864] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.668 [2024-10-13 17:43:36.100808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.668 [2024-10-13 17:43:36.101371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.101754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.101768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.668 [2024-10-13 17:43:36.101778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.668 [2024-10-13 17:43:36.101957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.668 [2024-10-13 17:43:36.102095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.668 [2024-10-13 17:43:36.102106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.668 [2024-10-13 17:43:36.102114] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.668 [2024-10-13 17:43:36.104156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.668 [2024-10-13 17:43:36.113280] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.668 [2024-10-13 17:43:36.113900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.114279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.114295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.668 [2024-10-13 17:43:36.114304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.668 [2024-10-13 17:43:36.114466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.668 [2024-10-13 17:43:36.114612] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.668 [2024-10-13 17:43:36.114621] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.668 [2024-10-13 17:43:36.114628] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.668 [2024-10-13 17:43:36.116690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.668 [2024-10-13 17:43:36.125618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.668 [2024-10-13 17:43:36.126214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.126556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.126571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.668 [2024-10-13 17:43:36.126582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.668 [2024-10-13 17:43:36.126707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.668 [2024-10-13 17:43:36.126836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.668 [2024-10-13 17:43:36.126846] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.668 [2024-10-13 17:43:36.126853] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.668 [2024-10-13 17:43:36.129009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.668 [2024-10-13 17:43:36.138101] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.668 [2024-10-13 17:43:36.138689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.138907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.138924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.668 [2024-10-13 17:43:36.138933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.668 [2024-10-13 17:43:36.139105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.668 [2024-10-13 17:43:36.139271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.668 [2024-10-13 17:43:36.139281] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.668 [2024-10-13 17:43:36.139289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.668 [2024-10-13 17:43:36.141614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.668 [2024-10-13 17:43:36.150592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.668 [2024-10-13 17:43:36.151149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.151498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.151513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.668 [2024-10-13 17:43:36.151523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.668 [2024-10-13 17:43:36.151666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.668 [2024-10-13 17:43:36.151813] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.668 [2024-10-13 17:43:36.151822] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.668 [2024-10-13 17:43:36.151830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.668 [2024-10-13 17:43:36.154153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.668 [2024-10-13 17:43:36.163080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.668 [2024-10-13 17:43:36.163587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.163916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.163932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.668 [2024-10-13 17:43:36.163940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.668 [2024-10-13 17:43:36.164072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.668 [2024-10-13 17:43:36.164235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.668 [2024-10-13 17:43:36.164244] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.668 [2024-10-13 17:43:36.164251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.668 [2024-10-13 17:43:36.166356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.668 [2024-10-13 17:43:36.175515] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.668 [2024-10-13 17:43:36.176034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.176261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.176276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.668 [2024-10-13 17:43:36.176286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.668 [2024-10-13 17:43:36.176466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.668 [2024-10-13 17:43:36.176576] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.668 [2024-10-13 17:43:36.176585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.668 [2024-10-13 17:43:36.176593] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.668 [2024-10-13 17:43:36.178784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.668 [2024-10-13 17:43:36.187949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.668 [2024-10-13 17:43:36.188491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.188864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.668 [2024-10-13 17:43:36.188878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.668 [2024-10-13 17:43:36.188888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.668 [2024-10-13 17:43:36.189032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.668 [2024-10-13 17:43:36.189153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.668 [2024-10-13 17:43:36.189163] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.669 [2024-10-13 17:43:36.189171] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.931 [2024-10-13 17:43:36.191504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.931 [2024-10-13 17:43:36.200407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.931 [2024-10-13 17:43:36.200895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.931 [2024-10-13 17:43:36.201227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.931 [2024-10-13 17:43:36.201239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.931 [2024-10-13 17:43:36.201252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.931 [2024-10-13 17:43:36.201341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.931 [2024-10-13 17:43:36.201501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.931 [2024-10-13 17:43:36.201511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.931 [2024-10-13 17:43:36.201519] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.931 [2024-10-13 17:43:36.203777] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.931 [2024-10-13 17:43:36.212782] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.931 [2024-10-13 17:43:36.213292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.931 [2024-10-13 17:43:36.213602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.931 [2024-10-13 17:43:36.213613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:27.931 [2024-10-13 17:43:36.213621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:27.931 [2024-10-13 17:43:36.213800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:27.931 [2024-10-13 17:43:36.213942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.931 [2024-10-13 17:43:36.213952] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.931 [2024-10-13 17:43:36.213961] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.931 [2024-10-13 17:43:36.216131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.931 [2024-10-13 17:43:36.225382] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.931 [2024-10-13 17:43:36.225867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.931 [2024-10-13 17:43:36.226200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.931 [2024-10-13 17:43:36.226212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.931 [2024-10-13 17:43:36.226220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.931 [2024-10-13 17:43:36.226381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.931 [2024-10-13 17:43:36.226527] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.931 [2024-10-13 17:43:36.226536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.931 [2024-10-13 17:43:36.226543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.931 [2024-10-13 17:43:36.228761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.931 [2024-10-13 17:43:36.237791] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.931 [2024-10-13 17:43:36.238336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.931 [2024-10-13 17:43:36.238668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.931 [2024-10-13 17:43:36.238679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.931 [2024-10-13 17:43:36.238686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.931 [2024-10-13 17:43:36.238832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.932 [2024-10-13 17:43:36.238957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.932 [2024-10-13 17:43:36.238965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.932 [2024-10-13 17:43:36.238972] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.932 [2024-10-13 17:43:36.241158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.932 [2024-10-13 17:43:36.250204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.932 [2024-10-13 17:43:36.250683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.251016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.251027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.932 [2024-10-13 17:43:36.251035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.932 [2024-10-13 17:43:36.251218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.932 [2024-10-13 17:43:36.251362] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.932 [2024-10-13 17:43:36.251371] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.932 [2024-10-13 17:43:36.251378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.932 [2024-10-13 17:43:36.253683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.932 [2024-10-13 17:43:36.262710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.932 [2024-10-13 17:43:36.263275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.263649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.263663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.932 [2024-10-13 17:43:36.263673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.932 [2024-10-13 17:43:36.263817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.932 [2024-10-13 17:43:36.263945] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.932 [2024-10-13 17:43:36.263955] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.932 [2024-10-13 17:43:36.263963] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.932 [2024-10-13 17:43:36.266211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.932 [2024-10-13 17:43:36.275163] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.932 [2024-10-13 17:43:36.275635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.275974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.275987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.932 [2024-10-13 17:43:36.275995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.932 [2024-10-13 17:43:36.276127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.932 [2024-10-13 17:43:36.276293] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.932 [2024-10-13 17:43:36.276303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.932 [2024-10-13 17:43:36.276310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.932 [2024-10-13 17:43:36.278623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.932 [2024-10-13 17:43:36.287391] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.932 [2024-10-13 17:43:36.288025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.288422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.288437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.932 [2024-10-13 17:43:36.288447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.932 [2024-10-13 17:43:36.288591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.932 [2024-10-13 17:43:36.288719] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.932 [2024-10-13 17:43:36.288728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.932 [2024-10-13 17:43:36.288736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.932 [2024-10-13 17:43:36.290890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.932 [2024-10-13 17:43:36.299706] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.932 [2024-10-13 17:43:36.300147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.300463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.300474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.932 [2024-10-13 17:43:36.300482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.932 [2024-10-13 17:43:36.300589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.932 [2024-10-13 17:43:36.300735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.932 [2024-10-13 17:43:36.300746] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.932 [2024-10-13 17:43:36.300756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.932 [2024-10-13 17:43:36.302991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.932 [2024-10-13 17:43:36.312361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.932 [2024-10-13 17:43:36.312709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.313016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.313027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.932 [2024-10-13 17:43:36.313035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.932 [2024-10-13 17:43:36.313203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.932 [2024-10-13 17:43:36.313347] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.932 [2024-10-13 17:43:36.313360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.932 [2024-10-13 17:43:36.313367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.932 [2024-10-13 17:43:36.315715] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.932 [2024-10-13 17:43:36.324883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.932 [2024-10-13 17:43:36.325468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.325809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.325824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.932 [2024-10-13 17:43:36.325834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.932 [2024-10-13 17:43:36.326001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.932 [2024-10-13 17:43:36.326178] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.932 [2024-10-13 17:43:36.326188] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.932 [2024-10-13 17:43:36.326196] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.932 [2024-10-13 17:43:36.328620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.932 [2024-10-13 17:43:36.337420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.932 [2024-10-13 17:43:36.337967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.338306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.338322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.932 [2024-10-13 17:43:36.338332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.932 [2024-10-13 17:43:36.338475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.932 [2024-10-13 17:43:36.338622] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.932 [2024-10-13 17:43:36.338632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.932 [2024-10-13 17:43:36.338640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.932 [2024-10-13 17:43:36.340739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.932 [2024-10-13 17:43:36.349936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.932 [2024-10-13 17:43:36.350463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.350790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.350805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.932 [2024-10-13 17:43:36.350815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.932 [2024-10-13 17:43:36.350958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.932 [2024-10-13 17:43:36.351095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.932 [2024-10-13 17:43:36.351105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.932 [2024-10-13 17:43:36.351117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.932 [2024-10-13 17:43:36.353268] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.932 [2024-10-13 17:43:36.362507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.932 [2024-10-13 17:43:36.363044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.363432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.932 [2024-10-13 17:43:36.363447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.932 [2024-10-13 17:43:36.363457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.933 [2024-10-13 17:43:36.363619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.933 [2024-10-13 17:43:36.363766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.933 [2024-10-13 17:43:36.363775] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.933 [2024-10-13 17:43:36.363782] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.933 [2024-10-13 17:43:36.366138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.933 [2024-10-13 17:43:36.374871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.933 [2024-10-13 17:43:36.375419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.375795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.375810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.933 [2024-10-13 17:43:36.375820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.933 [2024-10-13 17:43:36.375981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.933 [2024-10-13 17:43:36.376160] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.933 [2024-10-13 17:43:36.376172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.933 [2024-10-13 17:43:36.376180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.933 [2024-10-13 17:43:36.378419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.933 [2024-10-13 17:43:36.387448] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.933 [2024-10-13 17:43:36.388014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.388374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.388389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.933 [2024-10-13 17:43:36.388399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.933 [2024-10-13 17:43:36.388579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.933 [2024-10-13 17:43:36.388688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.933 [2024-10-13 17:43:36.388698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.933 [2024-10-13 17:43:36.388705] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.933 [2024-10-13 17:43:36.390722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.933 [2024-10-13 17:43:36.399757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.933 [2024-10-13 17:43:36.400174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.400564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.400578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.933 [2024-10-13 17:43:36.400588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.933 [2024-10-13 17:43:36.400695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.933 [2024-10-13 17:43:36.400841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.933 [2024-10-13 17:43:36.400850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.933 [2024-10-13 17:43:36.400858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.933 [2024-10-13 17:43:36.403197] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.933 [2024-10-13 17:43:36.412220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.933 [2024-10-13 17:43:36.412670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.413001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.413012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.933 [2024-10-13 17:43:36.413020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.933 [2024-10-13 17:43:36.413187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.933 [2024-10-13 17:43:36.413295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.933 [2024-10-13 17:43:36.413303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.933 [2024-10-13 17:43:36.413311] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.933 [2024-10-13 17:43:36.415453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.933 [2024-10-13 17:43:36.424603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.933 [2024-10-13 17:43:36.425045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.425364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.425375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.933 [2024-10-13 17:43:36.425383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.933 [2024-10-13 17:43:36.425544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.933 [2024-10-13 17:43:36.425705] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.933 [2024-10-13 17:43:36.425713] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.933 [2024-10-13 17:43:36.425721] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.933 [2024-10-13 17:43:36.427906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.933 [2024-10-13 17:43:36.437006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.933 [2024-10-13 17:43:36.437620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.437876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.437892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.933 [2024-10-13 17:43:36.437902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.933 [2024-10-13 17:43:36.438046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.933 [2024-10-13 17:43:36.438183] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.933 [2024-10-13 17:43:36.438193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.933 [2024-10-13 17:43:36.438201] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.933 [2024-10-13 17:43:36.440403] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:27.933 [2024-10-13 17:43:36.449481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.933 [2024-10-13 17:43:36.449908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.450218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.933 [2024-10-13 17:43:36.450231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:27.933 [2024-10-13 17:43:36.450239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:27.933 [2024-10-13 17:43:36.450418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:27.933 [2024-10-13 17:43:36.450560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:27.933 [2024-10-13 17:43:36.450569] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:27.933 [2024-10-13 17:43:36.450576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.933 [2024-10-13 17:43:36.452739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.196 [2024-10-13 17:43:36.462056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.196 [2024-10-13 17:43:36.462531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.462842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.462853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.196 [2024-10-13 17:43:36.462860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.196 [2024-10-13 17:43:36.462966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.196 [2024-10-13 17:43:36.463079] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.196 [2024-10-13 17:43:36.463088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.196 [2024-10-13 17:43:36.463095] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.196 [2024-10-13 17:43:36.465168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.196 [2024-10-13 17:43:36.474573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.196 [2024-10-13 17:43:36.474919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.475216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.475229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.196 [2024-10-13 17:43:36.475237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.196 [2024-10-13 17:43:36.475361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.196 [2024-10-13 17:43:36.475504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.196 [2024-10-13 17:43:36.475514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.196 [2024-10-13 17:43:36.475521] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.196 [2024-10-13 17:43:36.477946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.196 [2024-10-13 17:43:36.486953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.196 [2024-10-13 17:43:36.487447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.487757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.487768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.196 [2024-10-13 17:43:36.487776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.196 [2024-10-13 17:43:36.487864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.196 [2024-10-13 17:43:36.488024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.196 [2024-10-13 17:43:36.488033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.196 [2024-10-13 17:43:36.488040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.196 [2024-10-13 17:43:36.490154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.196 [2024-10-13 17:43:36.499559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.196 [2024-10-13 17:43:36.499993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.500307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.500319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.196 [2024-10-13 17:43:36.500327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.196 [2024-10-13 17:43:36.500452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.196 [2024-10-13 17:43:36.500594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.196 [2024-10-13 17:43:36.500602] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.196 [2024-10-13 17:43:36.500609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.196 [2024-10-13 17:43:36.502749] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.196 [2024-10-13 17:43:36.512082] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.196 [2024-10-13 17:43:36.512524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.512821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.512836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.196 [2024-10-13 17:43:36.512843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.196 [2024-10-13 17:43:36.513003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.196 [2024-10-13 17:43:36.513188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.196 [2024-10-13 17:43:36.513198] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.196 [2024-10-13 17:43:36.513205] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.196 [2024-10-13 17:43:36.515385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.196 [2024-10-13 17:43:36.524406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.196 [2024-10-13 17:43:36.524822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.525011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.525022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.196 [2024-10-13 17:43:36.525029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.196 [2024-10-13 17:43:36.525159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.196 [2024-10-13 17:43:36.525321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.196 [2024-10-13 17:43:36.525331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.196 [2024-10-13 17:43:36.525338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.196 [2024-10-13 17:43:36.527649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.196 [2024-10-13 17:43:36.537032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.196 [2024-10-13 17:43:36.537343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.537530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.196 [2024-10-13 17:43:36.537541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.196 [2024-10-13 17:43:36.537548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.196 [2024-10-13 17:43:36.537691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.196 [2024-10-13 17:43:36.537852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.196 [2024-10-13 17:43:36.537861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.196 [2024-10-13 17:43:36.537868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.196 [2024-10-13 17:43:36.540108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.197 [2024-10-13 17:43:36.549588] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.197 [2024-10-13 17:43:36.549985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.550301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.550313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.197 [2024-10-13 17:43:36.550328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.197 [2024-10-13 17:43:36.550471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.197 [2024-10-13 17:43:36.550577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.197 [2024-10-13 17:43:36.550586] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.197 [2024-10-13 17:43:36.550593] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.197 [2024-10-13 17:43:36.552971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.197 [2024-10-13 17:43:36.562074] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.197 [2024-10-13 17:43:36.562557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.562908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.562920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.197 [2024-10-13 17:43:36.562928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.197 [2024-10-13 17:43:36.563094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.197 [2024-10-13 17:43:36.563274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.197 [2024-10-13 17:43:36.563282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.197 [2024-10-13 17:43:36.563290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.197 [2024-10-13 17:43:36.565614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.197 [2024-10-13 17:43:36.574737] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.197 [2024-10-13 17:43:36.575279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.575657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.575671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.197 [2024-10-13 17:43:36.575681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.197 [2024-10-13 17:43:36.575806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.197 [2024-10-13 17:43:36.575988] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.197 [2024-10-13 17:43:36.575998] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.197 [2024-10-13 17:43:36.576006] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.197 [2024-10-13 17:43:36.578385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.197 [2024-10-13 17:43:36.587133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.197 [2024-10-13 17:43:36.587771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.588147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.588163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.197 [2024-10-13 17:43:36.588173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.197 [2024-10-13 17:43:36.588357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.197 [2024-10-13 17:43:36.588486] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.197 [2024-10-13 17:43:36.588495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.197 [2024-10-13 17:43:36.588503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.197 [2024-10-13 17:43:36.590762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.197 [2024-10-13 17:43:36.599496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.197 [2024-10-13 17:43:36.600123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.600499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.600514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.197 [2024-10-13 17:43:36.600523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.197 [2024-10-13 17:43:36.600685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.197 [2024-10-13 17:43:36.600850] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.197 [2024-10-13 17:43:36.600859] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.197 [2024-10-13 17:43:36.600867] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.197 [2024-10-13 17:43:36.603134] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.197 [2024-10-13 17:43:36.612066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.197 [2024-10-13 17:43:36.612641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.612901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.612916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.197 [2024-10-13 17:43:36.612926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.197 [2024-10-13 17:43:36.613052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.197 [2024-10-13 17:43:36.613227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.197 [2024-10-13 17:43:36.613237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.197 [2024-10-13 17:43:36.613245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.197 [2024-10-13 17:43:36.615376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.197 [2024-10-13 17:43:36.624479] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.197 [2024-10-13 17:43:36.625025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.625341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.625357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.197 [2024-10-13 17:43:36.625366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.197 [2024-10-13 17:43:36.625492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.197 [2024-10-13 17:43:36.625643] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.197 [2024-10-13 17:43:36.625652] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.197 [2024-10-13 17:43:36.625659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.197 [2024-10-13 17:43:36.627797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.197 [2024-10-13 17:43:36.636969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.197 [2024-10-13 17:43:36.637604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.637979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.637994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.197 [2024-10-13 17:43:36.638005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.197 [2024-10-13 17:43:36.638176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.197 [2024-10-13 17:43:36.638307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.197 [2024-10-13 17:43:36.638318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.197 [2024-10-13 17:43:36.638326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.197 [2024-10-13 17:43:36.640426] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.197 [2024-10-13 17:43:36.649488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.197 [2024-10-13 17:43:36.649948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.650246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.197 [2024-10-13 17:43:36.650259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.197 [2024-10-13 17:43:36.650267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.197 [2024-10-13 17:43:36.650428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.197 [2024-10-13 17:43:36.650607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.197 [2024-10-13 17:43:36.650616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.197 [2024-10-13 17:43:36.650623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.198 [2024-10-13 17:43:36.652913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.198 [2024-10-13 17:43:36.661920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.198 [2024-10-13 17:43:36.662463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.198 [2024-10-13 17:43:36.662790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.198 [2024-10-13 17:43:36.662805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.198 [2024-10-13 17:43:36.662814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.198 [2024-10-13 17:43:36.662940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.198 [2024-10-13 17:43:36.663095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.198 [2024-10-13 17:43:36.663109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.198 [2024-10-13 17:43:36.663117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.198 [2024-10-13 17:43:36.665375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.198 [2024-10-13 17:43:36.674493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.198 [2024-10-13 17:43:36.674813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.198 [2024-10-13 17:43:36.675125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.198 [2024-10-13 17:43:36.675139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.198 [2024-10-13 17:43:36.675147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.198 [2024-10-13 17:43:36.675292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.198 [2024-10-13 17:43:36.675435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.198 [2024-10-13 17:43:36.675444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.198 [2024-10-13 17:43:36.675451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.198 [2024-10-13 17:43:36.677568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.198 [2024-10-13 17:43:36.686881] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.198 [2024-10-13 17:43:36.687387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.198 [2024-10-13 17:43:36.687716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.198 [2024-10-13 17:43:36.687731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.198 [2024-10-13 17:43:36.687741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.198 [2024-10-13 17:43:36.687885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.198 [2024-10-13 17:43:36.688032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.198 [2024-10-13 17:43:36.688041] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.198 [2024-10-13 17:43:36.688049] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.198 [2024-10-13 17:43:36.690223] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.198 [2024-10-13 17:43:36.699276] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.198 [2024-10-13 17:43:36.699862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.198 [2024-10-13 17:43:36.700492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.198 [2024-10-13 17:43:36.700514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.198 [2024-10-13 17:43:36.700524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.198 [2024-10-13 17:43:36.700687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.198 [2024-10-13 17:43:36.700851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.198 [2024-10-13 17:43:36.700861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.198 [2024-10-13 17:43:36.700874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.198 [2024-10-13 17:43:36.703009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.198 [2024-10-13 17:43:36.711887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.198 [2024-10-13 17:43:36.712446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.198 [2024-10-13 17:43:36.712815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.198 [2024-10-13 17:43:36.712830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.198 [2024-10-13 17:43:36.712840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.198 [2024-10-13 17:43:36.712983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.198 [2024-10-13 17:43:36.713101] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.198 [2024-10-13 17:43:36.713110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.198 [2024-10-13 17:43:36.713118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.198 [2024-10-13 17:43:36.715375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.461 [2024-10-13 17:43:36.724397] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.461 [2024-10-13 17:43:36.724881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.725091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.725103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.461 [2024-10-13 17:43:36.725111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.461 [2024-10-13 17:43:36.725235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.461 [2024-10-13 17:43:36.725379] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.461 [2024-10-13 17:43:36.725390] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.461 [2024-10-13 17:43:36.725397] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.461 [2024-10-13 17:43:36.727512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.461 [2024-10-13 17:43:36.736775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.461 [2024-10-13 17:43:36.737315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.737690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.737705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.461 [2024-10-13 17:43:36.737715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.461 [2024-10-13 17:43:36.737877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.461 [2024-10-13 17:43:36.737986] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.461 [2024-10-13 17:43:36.737995] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.461 [2024-10-13 17:43:36.738003] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.461 [2024-10-13 17:43:36.740163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.461 [2024-10-13 17:43:36.749264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.461 [2024-10-13 17:43:36.749752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.750152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.750169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.461 [2024-10-13 17:43:36.750179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.461 [2024-10-13 17:43:36.750342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.461 [2024-10-13 17:43:36.750507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.461 [2024-10-13 17:43:36.750516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.461 [2024-10-13 17:43:36.750524] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.461 [2024-10-13 17:43:36.752785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.461 [2024-10-13 17:43:36.761696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.461 [2024-10-13 17:43:36.762137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.762341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.762355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.461 [2024-10-13 17:43:36.762365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.461 [2024-10-13 17:43:36.762545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.461 [2024-10-13 17:43:36.762673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.461 [2024-10-13 17:43:36.762683] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.461 [2024-10-13 17:43:36.762691] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.461 [2024-10-13 17:43:36.764883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.461 [2024-10-13 17:43:36.774289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.461 [2024-10-13 17:43:36.774864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.775133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.775150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.461 [2024-10-13 17:43:36.775160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.461 [2024-10-13 17:43:36.775304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.461 [2024-10-13 17:43:36.775469] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.461 [2024-10-13 17:43:36.775479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.461 [2024-10-13 17:43:36.775487] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.461 [2024-10-13 17:43:36.777863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.461 [2024-10-13 17:43:36.786777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.461 [2024-10-13 17:43:36.787325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.787559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.461 [2024-10-13 17:43:36.787573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.461 [2024-10-13 17:43:36.787582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.461 [2024-10-13 17:43:36.787745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.461 [2024-10-13 17:43:36.787929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.461 [2024-10-13 17:43:36.787939] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.461 [2024-10-13 17:43:36.787946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.461 [2024-10-13 17:43:36.790190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.461 [2024-10-13 17:43:36.799390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.461 [2024-10-13 17:43:36.799864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.461 [2024-10-13 17:43:36.800238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.461 [2024-10-13 17:43:36.800250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.461 [2024-10-13 17:43:36.800258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.461 [2024-10-13 17:43:36.800438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.461 [2024-10-13 17:43:36.800544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.461 [2024-10-13 17:43:36.800553] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.461 [2024-10-13 17:43:36.800560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.461 [2024-10-13 17:43:36.802904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.461 [2024-10-13 17:43:36.812056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.462 [2024-10-13 17:43:36.812472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.812801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.812812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.462 [2024-10-13 17:43:36.812820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.462 [2024-10-13 17:43:36.812944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.462 [2024-10-13 17:43:36.813092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.462 [2024-10-13 17:43:36.813102] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.462 [2024-10-13 17:43:36.813110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.462 [2024-10-13 17:43:36.815271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.462 [2024-10-13 17:43:36.824368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.462 [2024-10-13 17:43:36.824945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.825306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.825323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.462 [2024-10-13 17:43:36.825333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.462 [2024-10-13 17:43:36.825476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.462 [2024-10-13 17:43:36.825678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.462 [2024-10-13 17:43:36.825688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.462 [2024-10-13 17:43:36.825695] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.462 [2024-10-13 17:43:36.827999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.462 [2024-10-13 17:43:36.836730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.462 [2024-10-13 17:43:36.837342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.837715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.837730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.462 [2024-10-13 17:43:36.837740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.462 [2024-10-13 17:43:36.837865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.462 [2024-10-13 17:43:36.837939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.462 [2024-10-13 17:43:36.837946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.462 [2024-10-13 17:43:36.837954] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.462 [2024-10-13 17:43:36.840258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.462 [2024-10-13 17:43:36.849340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.462 [2024-10-13 17:43:36.849904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.850282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.850298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.462 [2024-10-13 17:43:36.850308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.462 [2024-10-13 17:43:36.850451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.462 [2024-10-13 17:43:36.850616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.462 [2024-10-13 17:43:36.850625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.462 [2024-10-13 17:43:36.850633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.462 [2024-10-13 17:43:36.852876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.462 [2024-10-13 17:43:36.862035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.462 [2024-10-13 17:43:36.862522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.862836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.862853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.462 [2024-10-13 17:43:36.862861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.462 [2024-10-13 17:43:36.862986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.462 [2024-10-13 17:43:36.863171] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.462 [2024-10-13 17:43:36.863181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.462 [2024-10-13 17:43:36.863189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.462 [2024-10-13 17:43:36.865388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.462 [2024-10-13 17:43:36.874430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.462 [2024-10-13 17:43:36.875014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.875406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.875422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.462 [2024-10-13 17:43:36.875432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.462 [2024-10-13 17:43:36.875575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.462 [2024-10-13 17:43:36.875721] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.462 [2024-10-13 17:43:36.875730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.462 [2024-10-13 17:43:36.875737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.462 [2024-10-13 17:43:36.877821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.462 [2024-10-13 17:43:36.886852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.462 [2024-10-13 17:43:36.887343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.887653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.887664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.462 [2024-10-13 17:43:36.887673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.462 [2024-10-13 17:43:36.887815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.462 [2024-10-13 17:43:36.887976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.462 [2024-10-13 17:43:36.887985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.462 [2024-10-13 17:43:36.887993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.462 [2024-10-13 17:43:36.890293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.462 [2024-10-13 17:43:36.899096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.462 [2024-10-13 17:43:36.899555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.899719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.899729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.462 [2024-10-13 17:43:36.899742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.462 [2024-10-13 17:43:36.899921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.462 [2024-10-13 17:43:36.900068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.462 [2024-10-13 17:43:36.900077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.462 [2024-10-13 17:43:36.900085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.462 [2024-10-13 17:43:36.902463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.462 [2024-10-13 17:43:36.911317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.462 [2024-10-13 17:43:36.911779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.912108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.462 [2024-10-13 17:43:36.912122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.462 [2024-10-13 17:43:36.912131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.462 [2024-10-13 17:43:36.912255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.462 [2024-10-13 17:43:36.912398] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.462 [2024-10-13 17:43:36.912408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.463 [2024-10-13 17:43:36.912416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.463 [2024-10-13 17:43:36.914688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.463 [2024-10-13 17:43:36.923749] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.463 [2024-10-13 17:43:36.924358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.463 [2024-10-13 17:43:36.924738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.463 [2024-10-13 17:43:36.924752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.463 [2024-10-13 17:43:36.924762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.463 [2024-10-13 17:43:36.924960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.463 [2024-10-13 17:43:36.925153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.463 [2024-10-13 17:43:36.925164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.463 [2024-10-13 17:43:36.925172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.463 [2024-10-13 17:43:36.927564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.463 [2024-10-13 17:43:36.936235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.463 [2024-10-13 17:43:36.936739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.463 [2024-10-13 17:43:36.937061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.463 [2024-10-13 17:43:36.937079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.463 [2024-10-13 17:43:36.937087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.463 [2024-10-13 17:43:36.937199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.463 [2024-10-13 17:43:36.937342] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.463 [2024-10-13 17:43:36.937351] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.463 [2024-10-13 17:43:36.937359] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.463 [2024-10-13 17:43:36.939703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.463 [2024-10-13 17:43:36.948604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.463 [2024-10-13 17:43:36.949008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.463 [2024-10-13 17:43:36.949383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.463 [2024-10-13 17:43:36.949396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.463 [2024-10-13 17:43:36.949404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.463 [2024-10-13 17:43:36.949528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.463 [2024-10-13 17:43:36.949653] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.463 [2024-10-13 17:43:36.949662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.463 [2024-10-13 17:43:36.949669] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.463 [2024-10-13 17:43:36.951975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.463 [2024-10-13 17:43:36.961100] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.463 [2024-10-13 17:43:36.961464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.463 [2024-10-13 17:43:36.961766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.463 [2024-10-13 17:43:36.961778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.463 [2024-10-13 17:43:36.961786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.463 [2024-10-13 17:43:36.961983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.463 [2024-10-13 17:43:36.962149] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.463 [2024-10-13 17:43:36.962159] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.463 [2024-10-13 17:43:36.962167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.463 [2024-10-13 17:43:36.964347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.463 [2024-10-13 17:43:36.973462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.463 [2024-10-13 17:43:36.974078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.463 [2024-10-13 17:43:36.974371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.463 [2024-10-13 17:43:36.974388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.463 [2024-10-13 17:43:36.974398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.463 [2024-10-13 17:43:36.974596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.463 [2024-10-13 17:43:36.974790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.463 [2024-10-13 17:43:36.974800] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.463 [2024-10-13 17:43:36.974808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.463 [2024-10-13 17:43:36.976858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.725 [2024-10-13 17:43:36.986024] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.725 [2024-10-13 17:43:36.986534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.725 [2024-10-13 17:43:36.986846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.725 [2024-10-13 17:43:36.986857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.725 [2024-10-13 17:43:36.986865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.725 [2024-10-13 17:43:36.987045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.725 [2024-10-13 17:43:36.987195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.725 [2024-10-13 17:43:36.987205] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.725 [2024-10-13 17:43:36.987213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.725 [2024-10-13 17:43:36.989377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.725 [2024-10-13 17:43:36.998657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.725 [2024-10-13 17:43:36.999240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.725 [2024-10-13 17:43:36.999615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.726 [2024-10-13 17:43:36.999629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.726 [2024-10-13 17:43:36.999639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.726 [2024-10-13 17:43:36.999838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.726 [2024-10-13 17:43:36.999929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.726 [2024-10-13 17:43:36.999939] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.726 [2024-10-13 17:43:36.999947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.726 [2024-10-13 17:43:37.002377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.726 [2024-10-13 17:43:37.011028] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.726 [2024-10-13 17:43:37.011504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.726 [2024-10-13 17:43:37.011825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.726 [2024-10-13 17:43:37.011836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.726 [2024-10-13 17:43:37.011844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.726 [2024-10-13 17:43:37.012041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.726 [2024-10-13 17:43:37.012226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.726 [2024-10-13 17:43:37.012241] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.726 [2024-10-13 17:43:37.012249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.726 [2024-10-13 17:43:37.014467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.726 [2024-10-13 17:43:37.023651] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.726 [2024-10-13 17:43:37.024112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.726 [2024-10-13 17:43:37.024435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.726 [2024-10-13 17:43:37.024446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.726 [2024-10-13 17:43:37.024454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.726 [2024-10-13 17:43:37.024596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.726 [2024-10-13 17:43:37.024758] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.726 [2024-10-13 17:43:37.024767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.726 [2024-10-13 17:43:37.024774] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.726 [2024-10-13 17:43:37.027109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.726 [2024-10-13 17:43:37.036169] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.726 [2024-10-13 17:43:37.036550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.726 [2024-10-13 17:43:37.036844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.726 [2024-10-13 17:43:37.036855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.726 [2024-10-13 17:43:37.036864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.726 [2024-10-13 17:43:37.037042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.726 [2024-10-13 17:43:37.037227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.726 [2024-10-13 17:43:37.037236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.726 [2024-10-13 17:43:37.037244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.726 [2024-10-13 17:43:37.039440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.726 [2024-10-13 17:43:37.048589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.726 [2024-10-13 17:43:37.049207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.726 [2024-10-13 17:43:37.049603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.726 [2024-10-13 17:43:37.049619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.726 [2024-10-13 17:43:37.049629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.726 [2024-10-13 17:43:37.049772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.726 [2024-10-13 17:43:37.049919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.726 [2024-10-13 17:43:37.049929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.726 [2024-10-13 17:43:37.049941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.726 [2024-10-13 17:43:37.052135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.726 [2024-10-13 17:43:37.061026] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.726 [2024-10-13 17:43:37.061633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.726 [2024-10-13 17:43:37.062004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.726 [2024-10-13 17:43:37.062019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.726 [2024-10-13 17:43:37.062029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.726 [2024-10-13 17:43:37.062140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.726 [2024-10-13 17:43:37.062233] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.726 [2024-10-13 17:43:37.062241] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.726 [2024-10-13 17:43:37.062248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.726 [2024-10-13 17:43:37.064543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.726 [2024-10-13 17:43:37.073542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.726 [2024-10-13 17:43:37.074015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.726 [2024-10-13 17:43:37.074349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.726 [2024-10-13 17:43:37.074361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.726 [2024-10-13 17:43:37.074369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.726 [2024-10-13 17:43:37.074548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.726 [2024-10-13 17:43:37.074727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.726 [2024-10-13 17:43:37.074736] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.726 [2024-10-13 17:43:37.074743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.726 [2024-10-13 17:43:37.077168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.726 [2024-10-13 17:43:37.085957] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.726 [2024-10-13 17:43:37.086408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.726 [2024-10-13 17:43:37.086733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.726 [2024-10-13 17:43:37.086745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.726 [2024-10-13 17:43:37.086753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.726 [2024-10-13 17:43:37.086877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.726 [2024-10-13 17:43:37.087000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.726 [2024-10-13 17:43:37.087009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.726 [2024-10-13 17:43:37.087016] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.726 [2024-10-13 17:43:37.089210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.726 [2024-10-13 17:43:37.098366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.726 [2024-10-13 17:43:37.098830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.726 [2024-10-13 17:43:37.099033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.726 [2024-10-13 17:43:37.099046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.726 [2024-10-13 17:43:37.099054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.726 [2024-10-13 17:43:37.099165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.726 [2024-10-13 17:43:37.099309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.726 [2024-10-13 17:43:37.099319] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.726 [2024-10-13 17:43:37.099327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.726 [2024-10-13 17:43:37.101706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.726 [2024-10-13 17:43:37.110952] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.726 [2024-10-13 17:43:37.111439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.726 [2024-10-13 17:43:37.111734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.726 [2024-10-13 17:43:37.111745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.726 [2024-10-13 17:43:37.111752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.726 [2024-10-13 17:43:37.111895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.727 [2024-10-13 17:43:37.112037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.727 [2024-10-13 17:43:37.112046] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.727 [2024-10-13 17:43:37.112053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.727 [2024-10-13 17:43:37.114274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.727 [2024-10-13 17:43:37.123435] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.727 [2024-10-13 17:43:37.123850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.124167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.124178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.727 [2024-10-13 17:43:37.124186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.727 [2024-10-13 17:43:37.124310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.727 [2024-10-13 17:43:37.124452] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.727 [2024-10-13 17:43:37.124461] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.727 [2024-10-13 17:43:37.124469] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.727 [2024-10-13 17:43:37.126592] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.727 [2024-10-13 17:43:37.136017] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.727 [2024-10-13 17:43:37.136509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.136845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.136856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.727 [2024-10-13 17:43:37.136863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.727 [2024-10-13 17:43:37.137041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.727 [2024-10-13 17:43:37.137206] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.727 [2024-10-13 17:43:37.137215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.727 [2024-10-13 17:43:37.137223] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.727 [2024-10-13 17:43:37.139493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.727 [2024-10-13 17:43:37.148432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.727 [2024-10-13 17:43:37.148905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.149754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.149778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.727 [2024-10-13 17:43:37.149787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.727 [2024-10-13 17:43:37.149954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.727 [2024-10-13 17:43:37.150124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.727 [2024-10-13 17:43:37.150134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.727 [2024-10-13 17:43:37.150141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.727 [2024-10-13 17:43:37.152290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.727 [2024-10-13 17:43:37.160815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.727 [2024-10-13 17:43:37.161159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.161441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.161452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.727 [2024-10-13 17:43:37.161460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.727 [2024-10-13 17:43:37.161621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.727 [2024-10-13 17:43:37.161728] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.727 [2024-10-13 17:43:37.161738] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.727 [2024-10-13 17:43:37.161745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.727 [2024-10-13 17:43:37.163961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.727 [2024-10-13 17:43:37.173279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.727 [2024-10-13 17:43:37.173693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.173975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.173986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.727 [2024-10-13 17:43:37.173994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.727 [2024-10-13 17:43:37.174107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.727 [2024-10-13 17:43:37.174232] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.727 [2024-10-13 17:43:37.174240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.727 [2024-10-13 17:43:37.174247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.727 [2024-10-13 17:43:37.176534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.727 [2024-10-13 17:43:37.185750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.727 [2024-10-13 17:43:37.186167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.187050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.187083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.727 [2024-10-13 17:43:37.187092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.727 [2024-10-13 17:43:37.187259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.727 [2024-10-13 17:43:37.187403] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.727 [2024-10-13 17:43:37.187412] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.727 [2024-10-13 17:43:37.187419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.727 [2024-10-13 17:43:37.189567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.727 [2024-10-13 17:43:37.198165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.727 [2024-10-13 17:43:37.198649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.198979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.198990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.727 [2024-10-13 17:43:37.198998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.727 [2024-10-13 17:43:37.199164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.727 [2024-10-13 17:43:37.199325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.727 [2024-10-13 17:43:37.199334] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.727 [2024-10-13 17:43:37.199342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.727 [2024-10-13 17:43:37.201538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.727 [2024-10-13 17:43:37.210546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.727 [2024-10-13 17:43:37.210906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.211239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.727 [2024-10-13 17:43:37.211255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.727 [2024-10-13 17:43:37.211263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.727 [2024-10-13 17:43:37.211442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.727 [2024-10-13 17:43:37.211513] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.728 [2024-10-13 17:43:37.211521] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.728 [2024-10-13 17:43:37.211528] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.728 [2024-10-13 17:43:37.213729] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.728 [2024-10-13 17:43:37.222941] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.728 [2024-10-13 17:43:37.223489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.728 [2024-10-13 17:43:37.223830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.728 [2024-10-13 17:43:37.223845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.728 [2024-10-13 17:43:37.223854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.728 [2024-10-13 17:43:37.224034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.728 [2024-10-13 17:43:37.224207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.728 [2024-10-13 17:43:37.224217] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.728 [2024-10-13 17:43:37.224225] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.728 [2024-10-13 17:43:37.226519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.728 [2024-10-13 17:43:37.235350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.728 [2024-10-13 17:43:37.235897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.728 [2024-10-13 17:43:37.236705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.728 [2024-10-13 17:43:37.236732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.728 [2024-10-13 17:43:37.236743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.728 [2024-10-13 17:43:37.236924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.728 [2024-10-13 17:43:37.237053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.728 [2024-10-13 17:43:37.237072] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.728 [2024-10-13 17:43:37.237080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.728 [2024-10-13 17:43:37.239320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.728 [2024-10-13 17:43:37.247888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.728 [2024-10-13 17:43:37.248273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.248658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.248670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.991 [2024-10-13 17:43:37.248683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.991 [2024-10-13 17:43:37.248844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.991 [2024-10-13 17:43:37.249005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.991 [2024-10-13 17:43:37.249014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.991 [2024-10-13 17:43:37.249022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.991 [2024-10-13 17:43:37.251173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.991 [2024-10-13 17:43:37.260364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.991 [2024-10-13 17:43:37.260855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.261195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.261212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.991 [2024-10-13 17:43:37.261221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.991 [2024-10-13 17:43:37.261346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.991 [2024-10-13 17:43:37.261475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.991 [2024-10-13 17:43:37.261484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.991 [2024-10-13 17:43:37.261491] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.991 [2024-10-13 17:43:37.263903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.991 [2024-10-13 17:43:37.273014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.991 [2024-10-13 17:43:37.273476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.273829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.273840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.991 [2024-10-13 17:43:37.273848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.991 [2024-10-13 17:43:37.273991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.991 [2024-10-13 17:43:37.274102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.991 [2024-10-13 17:43:37.274111] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.991 [2024-10-13 17:43:37.274118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.991 [2024-10-13 17:43:37.276352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.991 [2024-10-13 17:43:37.285367] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.991 [2024-10-13 17:43:37.285815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.286142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.286154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.991 [2024-10-13 17:43:37.286162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.991 [2024-10-13 17:43:37.286309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.991 [2024-10-13 17:43:37.286433] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.991 [2024-10-13 17:43:37.286442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.991 [2024-10-13 17:43:37.286449] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.991 [2024-10-13 17:43:37.288702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.991 [2024-10-13 17:43:37.297883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.991 [2024-10-13 17:43:37.298355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.298497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.298507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.991 [2024-10-13 17:43:37.298516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.991 [2024-10-13 17:43:37.298640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.991 [2024-10-13 17:43:37.298820] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.991 [2024-10-13 17:43:37.298829] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.991 [2024-10-13 17:43:37.298836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.991 [2024-10-13 17:43:37.301054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.991 [2024-10-13 17:43:37.310296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.991 [2024-10-13 17:43:37.310649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.310912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.310923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.991 [2024-10-13 17:43:37.310931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.991 [2024-10-13 17:43:37.311078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.991 [2024-10-13 17:43:37.311258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.991 [2024-10-13 17:43:37.311267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.991 [2024-10-13 17:43:37.311274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.991 [2024-10-13 17:43:37.313561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.991 [2024-10-13 17:43:37.322781] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.991 [2024-10-13 17:43:37.323249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.323559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.991 [2024-10-13 17:43:37.323570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.991 [2024-10-13 17:43:37.323578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.991 [2024-10-13 17:43:37.323774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.991 [2024-10-13 17:43:37.323939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.991 [2024-10-13 17:43:37.323947] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.991 [2024-10-13 17:43:37.323955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.992 [2024-10-13 17:43:37.326121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.992 [2024-10-13 17:43:37.335511] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.992 [2024-10-13 17:43:37.335937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.992 [2024-10-13 17:43:37.336273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.992 [2024-10-13 17:43:37.336284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.992 [2024-10-13 17:43:37.336293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.992 [2024-10-13 17:43:37.336417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.992 [2024-10-13 17:43:37.336560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.992 [2024-10-13 17:43:37.336570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.992 [2024-10-13 17:43:37.336577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.992 [2024-10-13 17:43:37.338810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.992 [2024-10-13 17:43:37.347829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.992 [2024-10-13 17:43:37.348265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.992 [2024-10-13 17:43:37.348572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.992 [2024-10-13 17:43:37.348583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.992 [2024-10-13 17:43:37.348592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.992 [2024-10-13 17:43:37.348788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.992 [2024-10-13 17:43:37.348949] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.992 [2024-10-13 17:43:37.348959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.992 [2024-10-13 17:43:37.348967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.992 [2024-10-13 17:43:37.351348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.992 [2024-10-13 17:43:37.360192] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.992 [2024-10-13 17:43:37.360746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.992 [2024-10-13 17:43:37.361113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.992 [2024-10-13 17:43:37.361130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.992 [2024-10-13 17:43:37.361140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.992 [2024-10-13 17:43:37.361302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.992 [2024-10-13 17:43:37.361449] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.992 [2024-10-13 17:43:37.361462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.992 [2024-10-13 17:43:37.361470] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.992 [2024-10-13 17:43:37.363783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.992 [2024-10-13 17:43:37.372500] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.992 [2024-10-13 17:43:37.373019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.992 [2024-10-13 17:43:37.373258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.992 [2024-10-13 17:43:37.373271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.992 [2024-10-13 17:43:37.373279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.992 [2024-10-13 17:43:37.373386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.992 [2024-10-13 17:43:37.373492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.992 [2024-10-13 17:43:37.373501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.992 [2024-10-13 17:43:37.373509] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.992 [2024-10-13 17:43:37.375687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.992 [2024-10-13 17:43:37.384828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.992 [2024-10-13 17:43:37.385239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.992 [2024-10-13 17:43:37.385513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.992 [2024-10-13 17:43:37.385524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:28.992 [2024-10-13 17:43:37.385532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:28.992 [2024-10-13 17:43:37.385656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:28.992 [2024-10-13 17:43:37.385835] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:28.992 [2024-10-13 17:43:37.385844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:28.992 [2024-10-13 17:43:37.385851] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.992 [2024-10-13 17:43:37.388010] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:28.992 [2024-10-13 17:43:37.397237] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.992 [2024-10-13 17:43:37.397684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-10-13 17:43:37.397980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-10-13 17:43:37.397991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.992 [2024-10-13 17:43:37.397999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.992 [2024-10-13 17:43:37.398128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.992 [2024-10-13 17:43:37.398307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.992 [2024-10-13 17:43:37.398315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.992 [2024-10-13 17:43:37.398326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.992 [2024-10-13 17:43:37.400837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.992 [2024-10-13 17:43:37.409609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.992 [2024-10-13 17:43:37.410165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-10-13 17:43:37.410540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-10-13 17:43:37.410554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.992 [2024-10-13 17:43:37.410565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.992 [2024-10-13 17:43:37.410708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.992 [2024-10-13 17:43:37.410873] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.992 [2024-10-13 17:43:37.410882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.992 [2024-10-13 17:43:37.410890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.992 [2024-10-13 17:43:37.413320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.992 [2024-10-13 17:43:37.422011] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.992 [2024-10-13 17:43:37.422614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-10-13 17:43:37.422993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-10-13 17:43:37.423008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.992 [2024-10-13 17:43:37.423017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.992 [2024-10-13 17:43:37.423223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.992 [2024-10-13 17:43:37.423389] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.992 [2024-10-13 17:43:37.423398] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.992 [2024-10-13 17:43:37.423406] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.992 [2024-10-13 17:43:37.425662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.992 [2024-10-13 17:43:37.434358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.992 [2024-10-13 17:43:37.434803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-10-13 17:43:37.435113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-10-13 17:43:37.435125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.992 [2024-10-13 17:43:37.435134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.992 [2024-10-13 17:43:37.435258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.992 [2024-10-13 17:43:37.435382] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.992 [2024-10-13 17:43:37.435391] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.992 [2024-10-13 17:43:37.435399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.992 [2024-10-13 17:43:37.437439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.992 [2024-10-13 17:43:37.447014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.992 [2024-10-13 17:43:37.447520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-10-13 17:43:37.447853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.992 [2024-10-13 17:43:37.447864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.992 [2024-10-13 17:43:37.447871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.992 [2024-10-13 17:43:37.448073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.992 [2024-10-13 17:43:37.448198] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.992 [2024-10-13 17:43:37.448207] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.992 [2024-10-13 17:43:37.448214] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.993 [2024-10-13 17:43:37.450377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.993 [2024-10-13 17:43:37.459609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.993 [2024-10-13 17:43:37.460108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-10-13 17:43:37.460312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-10-13 17:43:37.460322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.993 [2024-10-13 17:43:37.460331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.993 [2024-10-13 17:43:37.460477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.993 [2024-10-13 17:43:37.460621] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.993 [2024-10-13 17:43:37.460630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.993 [2024-10-13 17:43:37.460637] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.993 [2024-10-13 17:43:37.462767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.993 [2024-10-13 17:43:37.472051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.993 [2024-10-13 17:43:37.472620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-10-13 17:43:37.472950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-10-13 17:43:37.472965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.993 [2024-10-13 17:43:37.472974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.993 [2024-10-13 17:43:37.473145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.993 [2024-10-13 17:43:37.473292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.993 [2024-10-13 17:43:37.473301] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.993 [2024-10-13 17:43:37.473309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.993 [2024-10-13 17:43:37.475476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.993 [2024-10-13 17:43:37.484538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.993 [2024-10-13 17:43:37.485100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-10-13 17:43:37.485348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-10-13 17:43:37.485362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.993 [2024-10-13 17:43:37.485372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.993 [2024-10-13 17:43:37.485534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.993 [2024-10-13 17:43:37.485701] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.993 [2024-10-13 17:43:37.485710] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.993 [2024-10-13 17:43:37.485718] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.993 [2024-10-13 17:43:37.488128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.993 [2024-10-13 17:43:37.497024] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.993 [2024-10-13 17:43:37.497331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-10-13 17:43:37.497661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-10-13 17:43:37.497672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.993 [2024-10-13 17:43:37.497680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.993 [2024-10-13 17:43:37.497768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.993 [2024-10-13 17:43:37.497928] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.993 [2024-10-13 17:43:37.497937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.993 [2024-10-13 17:43:37.497944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.993 [2024-10-13 17:43:37.500164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.993 [2024-10-13 17:43:37.509542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.993 [2024-10-13 17:43:37.509960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-10-13 17:43:37.510276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.993 [2024-10-13 17:43:37.510289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:28.993 [2024-10-13 17:43:37.510297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:28.993 [2024-10-13 17:43:37.510476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:28.993 [2024-10-13 17:43:37.510675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.993 [2024-10-13 17:43:37.510684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.993 [2024-10-13 17:43:37.510691] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.993 [2024-10-13 17:43:37.512766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.255 [2024-10-13 17:43:37.521819] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.255 [2024-10-13 17:43:37.522391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.255 [2024-10-13 17:43:37.522766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.255 [2024-10-13 17:43:37.522780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.255 [2024-10-13 17:43:37.522790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.255 [2024-10-13 17:43:37.522896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.255 [2024-10-13 17:43:37.523043] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.255 [2024-10-13 17:43:37.523052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.255 [2024-10-13 17:43:37.523060] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.255 [2024-10-13 17:43:37.525236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.255 [2024-10-13 17:43:37.534116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.255 [2024-10-13 17:43:37.534651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.255 [2024-10-13 17:43:37.534993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.255 [2024-10-13 17:43:37.535008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.255 [2024-10-13 17:43:37.535018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.255 [2024-10-13 17:43:37.535224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.255 [2024-10-13 17:43:37.535372] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.255 [2024-10-13 17:43:37.535381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.256 [2024-10-13 17:43:37.535389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.256 [2024-10-13 17:43:37.537774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.256 [2024-10-13 17:43:37.546700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.256 [2024-10-13 17:43:37.547188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.547555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.547569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.256 [2024-10-13 17:43:37.547579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.256 [2024-10-13 17:43:37.547740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.256 [2024-10-13 17:43:37.547850] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.256 [2024-10-13 17:43:37.547860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.256 [2024-10-13 17:43:37.547867] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.256 [2024-10-13 17:43:37.550244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.256 [2024-10-13 17:43:37.559072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.256 [2024-10-13 17:43:37.559679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.560054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.560081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.256 [2024-10-13 17:43:37.560091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.256 [2024-10-13 17:43:37.560216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.256 [2024-10-13 17:43:37.560381] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.256 [2024-10-13 17:43:37.560390] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.256 [2024-10-13 17:43:37.560398] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.256 [2024-10-13 17:43:37.562657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.256 [2024-10-13 17:43:37.571598] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.256 [2024-10-13 17:43:37.572088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.572470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.572485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.256 [2024-10-13 17:43:37.572495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.256 [2024-10-13 17:43:37.572693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.256 [2024-10-13 17:43:37.572839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.256 [2024-10-13 17:43:37.572848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.256 [2024-10-13 17:43:37.572856] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.256 [2024-10-13 17:43:37.575029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.256 [2024-10-13 17:43:37.584037] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.256 [2024-10-13 17:43:37.584587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.584925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.584939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.256 [2024-10-13 17:43:37.584949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.256 [2024-10-13 17:43:37.585083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.256 [2024-10-13 17:43:37.585212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.256 [2024-10-13 17:43:37.585221] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.256 [2024-10-13 17:43:37.585228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.256 [2024-10-13 17:43:37.587415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.256 [2024-10-13 17:43:37.596694] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.256 [2024-10-13 17:43:37.597333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.597661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.597675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.256 [2024-10-13 17:43:37.597689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.256 [2024-10-13 17:43:37.597851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.256 [2024-10-13 17:43:37.597979] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.256 [2024-10-13 17:43:37.597988] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.256 [2024-10-13 17:43:37.597996] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.256 [2024-10-13 17:43:37.600263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.256 [2024-10-13 17:43:37.609172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.256 [2024-10-13 17:43:37.609698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.610082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.610098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.256 [2024-10-13 17:43:37.610108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.256 [2024-10-13 17:43:37.610288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.256 [2024-10-13 17:43:37.610398] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.256 [2024-10-13 17:43:37.610407] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.256 [2024-10-13 17:43:37.610414] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.256 [2024-10-13 17:43:37.612730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.256 [2024-10-13 17:43:37.621624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.256 [2024-10-13 17:43:37.622166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.622530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.256 [2024-10-13 17:43:37.622545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.256 [2024-10-13 17:43:37.622555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.256 [2024-10-13 17:43:37.622662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.256 [2024-10-13 17:43:37.622790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.256 [2024-10-13 17:43:37.622799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.256 [2024-10-13 17:43:37.622806] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.256 [2024-10-13 17:43:37.625111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.256 [2024-10-13 17:43:37.634160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.256 [2024-10-13 17:43:37.634765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.256 [2024-10-13 17:43:37.635142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.256 [2024-10-13 17:43:37.635158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.256 [2024-10-13 17:43:37.635168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.256 [2024-10-13 17:43:37.635334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.256 [2024-10-13 17:43:37.635480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.256 [2024-10-13 17:43:37.635489] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.256 [2024-10-13 17:43:37.635497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.256 [2024-10-13 17:43:37.637611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.256 [2024-10-13 17:43:37.646636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.256 [2024-10-13 17:43:37.647072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.256 [2024-10-13 17:43:37.647397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.256 [2024-10-13 17:43:37.647408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.256 [2024-10-13 17:43:37.647416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.256 [2024-10-13 17:43:37.647559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.256 [2024-10-13 17:43:37.647738] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.257 [2024-10-13 17:43:37.647747] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.257 [2024-10-13 17:43:37.647754] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.257 [2024-10-13 17:43:37.650026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.257 [2024-10-13 17:43:37.659388] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.257 [2024-10-13 17:43:37.659971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.660293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.660310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.257 [2024-10-13 17:43:37.660319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.257 [2024-10-13 17:43:37.660499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.257 [2024-10-13 17:43:37.660664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.257 [2024-10-13 17:43:37.660674] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.257 [2024-10-13 17:43:37.660681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.257 [2024-10-13 17:43:37.662961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.257 [2024-10-13 17:43:37.671986] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.257 [2024-10-13 17:43:37.672545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.672873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.672888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.257 [2024-10-13 17:43:37.672898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.257 [2024-10-13 17:43:37.673104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.257 [2024-10-13 17:43:37.673237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.257 [2024-10-13 17:43:37.673247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.257 [2024-10-13 17:43:37.673254] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.257 [2024-10-13 17:43:37.675404] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.257 [2024-10-13 17:43:37.684684] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.257 [2024-10-13 17:43:37.685170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.685461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.685472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.257 [2024-10-13 17:43:37.685480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.257 [2024-10-13 17:43:37.685587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.257 [2024-10-13 17:43:37.685693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.257 [2024-10-13 17:43:37.685701] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.257 [2024-10-13 17:43:37.685708] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.257 [2024-10-13 17:43:37.688054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.257 [2024-10-13 17:43:37.697023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.257 [2024-10-13 17:43:37.697500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.697830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.697844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.257 [2024-10-13 17:43:37.697854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.257 [2024-10-13 17:43:37.697979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.257 [2024-10-13 17:43:37.698151] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.257 [2024-10-13 17:43:37.698161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.257 [2024-10-13 17:43:37.698169] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.257 [2024-10-13 17:43:37.700608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.257 [2024-10-13 17:43:37.709544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.257 [2024-10-13 17:43:37.710139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.710485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.710499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.257 [2024-10-13 17:43:37.710509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.257 [2024-10-13 17:43:37.710671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.257 [2024-10-13 17:43:37.710799] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.257 [2024-10-13 17:43:37.710812] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.257 [2024-10-13 17:43:37.710821] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.257 [2024-10-13 17:43:37.712959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.257 [2024-10-13 17:43:37.721959] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.257 [2024-10-13 17:43:37.722577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.722908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.722922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.257 [2024-10-13 17:43:37.722932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.257 [2024-10-13 17:43:37.723101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.257 [2024-10-13 17:43:37.723267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.257 [2024-10-13 17:43:37.723277] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.257 [2024-10-13 17:43:37.723284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.257 [2024-10-13 17:43:37.725488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.257 [2024-10-13 17:43:37.734469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.257 [2024-10-13 17:43:37.734958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.735248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.735260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.257 [2024-10-13 17:43:37.735268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.257 [2024-10-13 17:43:37.735466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.257 [2024-10-13 17:43:37.735609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.257 [2024-10-13 17:43:37.735618] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.257 [2024-10-13 17:43:37.735625] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.257 [2024-10-13 17:43:37.737968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.257 [2024-10-13 17:43:37.746940] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.257 [2024-10-13 17:43:37.747432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.747672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.747687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.257 [2024-10-13 17:43:37.747698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.257 [2024-10-13 17:43:37.747843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.257 [2024-10-13 17:43:37.747972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.257 [2024-10-13 17:43:37.747982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.257 [2024-10-13 17:43:37.747994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.257 [2024-10-13 17:43:37.749971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.257 [2024-10-13 17:43:37.759540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.257 [2024-10-13 17:43:37.760142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.760492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.760506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.257 [2024-10-13 17:43:37.760516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.257 [2024-10-13 17:43:37.760677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.257 [2024-10-13 17:43:37.760841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.257 [2024-10-13 17:43:37.760851] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.257 [2024-10-13 17:43:37.760858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.257 [2024-10-13 17:43:37.763069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.257 [2024-10-13 17:43:37.772070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.257 [2024-10-13 17:43:37.772671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.772902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.257 [2024-10-13 17:43:37.772918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.257 [2024-10-13 17:43:37.772928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.258 [2024-10-13 17:43:37.773118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.258 [2024-10-13 17:43:37.773320] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.258 [2024-10-13 17:43:37.773330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.258 [2024-10-13 17:43:37.773338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.258 [2024-10-13 17:43:37.775560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.519 [2024-10-13 17:43:37.784540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.519 [2024-10-13 17:43:37.784985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.519 [2024-10-13 17:43:37.785251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.519 [2024-10-13 17:43:37.785264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.519 [2024-10-13 17:43:37.785273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.519 [2024-10-13 17:43:37.785453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.519 [2024-10-13 17:43:37.785597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.519 [2024-10-13 17:43:37.785607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.519 [2024-10-13 17:43:37.785614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.519 [2024-10-13 17:43:37.787913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.519 [2024-10-13 17:43:37.796874] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.519 [2024-10-13 17:43:37.797398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.519 [2024-10-13 17:43:37.797774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.519 [2024-10-13 17:43:37.797789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.519 [2024-10-13 17:43:37.797799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.519 [2024-10-13 17:43:37.797960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.519 [2024-10-13 17:43:37.798052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.519 [2024-10-13 17:43:37.798061] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.519 [2024-10-13 17:43:37.798079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.519 [2024-10-13 17:43:37.800301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.519 [2024-10-13 17:43:37.809192] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.519 [2024-10-13 17:43:37.809772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.810151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.810167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.520 [2024-10-13 17:43:37.810177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.520 [2024-10-13 17:43:37.810284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.520 [2024-10-13 17:43:37.810467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.520 [2024-10-13 17:43:37.810476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.520 [2024-10-13 17:43:37.810483] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.520 [2024-10-13 17:43:37.812724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.520 [2024-10-13 17:43:37.821638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.520 [2024-10-13 17:43:37.822162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.822504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.822519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.520 [2024-10-13 17:43:37.822529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.520 [2024-10-13 17:43:37.822690] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.520 [2024-10-13 17:43:37.822837] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.520 [2024-10-13 17:43:37.822846] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.520 [2024-10-13 17:43:37.822854] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.520 [2024-10-13 17:43:37.825249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.520 [2024-10-13 17:43:37.834110] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.520 [2024-10-13 17:43:37.834681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.835046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.835060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.520 [2024-10-13 17:43:37.835079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.520 [2024-10-13 17:43:37.835241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.520 [2024-10-13 17:43:37.835351] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.520 [2024-10-13 17:43:37.835361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.520 [2024-10-13 17:43:37.835368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.520 [2024-10-13 17:43:37.837625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.520 [2024-10-13 17:43:37.846445] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.520 [2024-10-13 17:43:37.847024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.847340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.847356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.520 [2024-10-13 17:43:37.847365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.520 [2024-10-13 17:43:37.847545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.520 [2024-10-13 17:43:37.847710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.520 [2024-10-13 17:43:37.847719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.520 [2024-10-13 17:43:37.847727] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.520 [2024-10-13 17:43:37.850171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.520 [2024-10-13 17:43:37.858866] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.520 [2024-10-13 17:43:37.859353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.859668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.859679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.520 [2024-10-13 17:43:37.859687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.520 [2024-10-13 17:43:37.859885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.520 [2024-10-13 17:43:37.860069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.520 [2024-10-13 17:43:37.860078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.520 [2024-10-13 17:43:37.860085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.520 [2024-10-13 17:43:37.862337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.520 [2024-10-13 17:43:37.871395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.520 [2024-10-13 17:43:37.871973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.872300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.872316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.520 [2024-10-13 17:43:37.872326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.520 [2024-10-13 17:43:37.872525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.520 [2024-10-13 17:43:37.872671] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.520 [2024-10-13 17:43:37.872681] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.520 [2024-10-13 17:43:37.872688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.520 [2024-10-13 17:43:37.874840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.520 [2024-10-13 17:43:37.883858] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.520 [2024-10-13 17:43:37.884375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.884705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.884719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.520 [2024-10-13 17:43:37.884729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.520 [2024-10-13 17:43:37.884872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.520 [2024-10-13 17:43:37.885084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.520 [2024-10-13 17:43:37.885095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.520 [2024-10-13 17:43:37.885103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.520 [2024-10-13 17:43:37.887289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.520 [2024-10-13 17:43:37.896299] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.520 [2024-10-13 17:43:37.896906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.897284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.897300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.520 [2024-10-13 17:43:37.897310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.520 [2024-10-13 17:43:37.897527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.520 [2024-10-13 17:43:37.897728] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.520 [2024-10-13 17:43:37.897737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.520 [2024-10-13 17:43:37.897745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.520 [2024-10-13 17:43:37.900081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.520 [2024-10-13 17:43:37.908792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.520 [2024-10-13 17:43:37.909359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.909735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.909754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.520 [2024-10-13 17:43:37.909764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.520 [2024-10-13 17:43:37.909926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.520 [2024-10-13 17:43:37.910054] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.520 [2024-10-13 17:43:37.910071] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.520 [2024-10-13 17:43:37.910079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.520 [2024-10-13 17:43:37.912374] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.520 [2024-10-13 17:43:37.921250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.520 [2024-10-13 17:43:37.921778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.922113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.520 [2024-10-13 17:43:37.922129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.520 [2024-10-13 17:43:37.922139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.520 [2024-10-13 17:43:37.922320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.520 [2024-10-13 17:43:37.922485] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.520 [2024-10-13 17:43:37.922495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.520 [2024-10-13 17:43:37.922504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.520 [2024-10-13 17:43:37.925020] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.520 [2024-10-13 17:43:37.933675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.521 [2024-10-13 17:43:37.934192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.521 [2024-10-13 17:43:37.934531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.521 [2024-10-13 17:43:37.934546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.521 [2024-10-13 17:43:37.934555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.521 [2024-10-13 17:43:37.934736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.521 [2024-10-13 17:43:37.934864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.521 [2024-10-13 17:43:37.934873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.521 [2024-10-13 17:43:37.934880] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.521 [2024-10-13 17:43:37.937239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.521 [2024-10-13 17:43:37.946112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.521 [2024-10-13 17:43:37.946691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.521 [2024-10-13 17:43:37.947072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.521 [2024-10-13 17:43:37.947088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.521 [2024-10-13 17:43:37.947102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.521 [2024-10-13 17:43:37.947227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.521 [2024-10-13 17:43:37.947392] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.521 [2024-10-13 17:43:37.947402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.521 [2024-10-13 17:43:37.947409] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.521 [2024-10-13 17:43:37.949743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.521 [2024-10-13 17:43:37.958520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.521 [2024-10-13 17:43:37.958967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.521 [2024-10-13 17:43:37.959318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.521 [2024-10-13 17:43:37.959330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.521 [2024-10-13 17:43:37.959338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.521 [2024-10-13 17:43:37.959481] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.521 [2024-10-13 17:43:37.959623] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.521 [2024-10-13 17:43:37.959632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.521 [2024-10-13 17:43:37.959640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.521 [2024-10-13 17:43:37.961855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.521 [2024-10-13 17:43:37.971048] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.521 [2024-10-13 17:43:37.971528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:37.971837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:37.971848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.521 [2024-10-13 17:43:37.971855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.521 [2024-10-13 17:43:37.971961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.521 [2024-10-13 17:43:37.972092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.521 [2024-10-13 17:43:37.972100] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.521 [2024-10-13 17:43:37.972108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.521 [2024-10-13 17:43:37.974380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.521 [2024-10-13 17:43:37.983600] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.521 [2024-10-13 17:43:37.984006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:37.984353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:37.984365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.521 [2024-10-13 17:43:37.984373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.521 [2024-10-13 17:43:37.984556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.521 [2024-10-13 17:43:37.984753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.521 [2024-10-13 17:43:37.984764] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.521 [2024-10-13 17:43:37.984772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.521 [2024-10-13 17:43:37.987101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.521 [2024-10-13 17:43:37.996066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.521 [2024-10-13 17:43:37.996535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:37.996778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:37.996792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.521 [2024-10-13 17:43:37.996803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.521 [2024-10-13 17:43:37.996947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.521 [2024-10-13 17:43:37.997103] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.521 [2024-10-13 17:43:37.997113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.521 [2024-10-13 17:43:37.997121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.521 [2024-10-13 17:43:37.999344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.521 [2024-10-13 17:43:38.008691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.521 [2024-10-13 17:43:38.009300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:38.009675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:38.009690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.521 [2024-10-13 17:43:38.009700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.521 [2024-10-13 17:43:38.009862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.521 [2024-10-13 17:43:38.010045] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.521 [2024-10-13 17:43:38.010055] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.521 [2024-10-13 17:43:38.010070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.521 [2024-10-13 17:43:38.012421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.521 [2024-10-13 17:43:38.021079] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.521 [2024-10-13 17:43:38.021535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:38.021835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:38.021846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.521 [2024-10-13 17:43:38.021855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.521 [2024-10-13 17:43:38.021997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.521 [2024-10-13 17:43:38.022115] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.521 [2024-10-13 17:43:38.022125] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.521 [2024-10-13 17:43:38.022132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.521 [2024-10-13 17:43:38.024228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.521 [2024-10-13 17:43:38.033521] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.521 [2024-10-13 17:43:38.034009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:38.034338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.521 [2024-10-13 17:43:38.034350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.521 [2024-10-13 17:43:38.034358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.521 [2024-10-13 17:43:38.034519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.521 [2024-10-13 17:43:38.034643] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.521 [2024-10-13 17:43:38.034652] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.521 [2024-10-13 17:43:38.034660] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.521 [2024-10-13 17:43:38.036933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.784 [2024-10-13 17:43:38.045883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.784 [2024-10-13 17:43:38.046301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.046611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.046623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.784 [2024-10-13 17:43:38.046631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.784 [2024-10-13 17:43:38.046792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.784 [2024-10-13 17:43:38.046916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.784 [2024-10-13 17:43:38.046925] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.784 [2024-10-13 17:43:38.046932] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.784 [2024-10-13 17:43:38.049193] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.784 [2024-10-13 17:43:38.058199] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.784 [2024-10-13 17:43:38.058791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.059030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.059046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.784 [2024-10-13 17:43:38.059057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.784 [2024-10-13 17:43:38.059229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.784 [2024-10-13 17:43:38.059432] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.784 [2024-10-13 17:43:38.059446] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.784 [2024-10-13 17:43:38.059454] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.784 [2024-10-13 17:43:38.061733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.784 [2024-10-13 17:43:38.070697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.784 [2024-10-13 17:43:38.071215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.071561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.071576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.784 [2024-10-13 17:43:38.071586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.784 [2024-10-13 17:43:38.071765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.784 [2024-10-13 17:43:38.071933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.784 [2024-10-13 17:43:38.071942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.784 [2024-10-13 17:43:38.071950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.784 [2024-10-13 17:43:38.074169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.784 [2024-10-13 17:43:38.083209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.784 [2024-10-13 17:43:38.083685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.084010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.084025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.784 [2024-10-13 17:43:38.084034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.784 [2024-10-13 17:43:38.084148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.784 [2024-10-13 17:43:38.084277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.784 [2024-10-13 17:43:38.084286] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.784 [2024-10-13 17:43:38.084295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.784 [2024-10-13 17:43:38.086496] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.784 [2024-10-13 17:43:38.095656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.784 [2024-10-13 17:43:38.096246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.096614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.096628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.784 [2024-10-13 17:43:38.096638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.784 [2024-10-13 17:43:38.096818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.784 [2024-10-13 17:43:38.096983] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.784 [2024-10-13 17:43:38.096992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.784 [2024-10-13 17:43:38.097004] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.784 [2024-10-13 17:43:38.099292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.784 [2024-10-13 17:43:38.108057] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.784 [2024-10-13 17:43:38.108604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.108935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.108949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.784 [2024-10-13 17:43:38.108959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.784 [2024-10-13 17:43:38.109112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.784 [2024-10-13 17:43:38.109242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.784 [2024-10-13 17:43:38.109251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.784 [2024-10-13 17:43:38.109259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.784 [2024-10-13 17:43:38.111244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.784 [2024-10-13 17:43:38.120598] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.784 [2024-10-13 17:43:38.121201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.784 [2024-10-13 17:43:38.121534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.121549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.785 [2024-10-13 17:43:38.121559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.785 [2024-10-13 17:43:38.121702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.785 [2024-10-13 17:43:38.121849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.785 [2024-10-13 17:43:38.121858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.785 [2024-10-13 17:43:38.121866] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.785 [2024-10-13 17:43:38.124058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.785 [2024-10-13 17:43:38.133048] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.785 [2024-10-13 17:43:38.133676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.133894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.133910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.785 [2024-10-13 17:43:38.133920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.785 [2024-10-13 17:43:38.134073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.785 [2024-10-13 17:43:38.134202] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.785 [2024-10-13 17:43:38.134213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.785 [2024-10-13 17:43:38.134221] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.785 [2024-10-13 17:43:38.136429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.785 [2024-10-13 17:43:38.145636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.785 [2024-10-13 17:43:38.146272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.146602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.146617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.785 [2024-10-13 17:43:38.146626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.785 [2024-10-13 17:43:38.146770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.785 [2024-10-13 17:43:38.146954] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.785 [2024-10-13 17:43:38.146963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.785 [2024-10-13 17:43:38.146971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.785 [2024-10-13 17:43:38.149309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.785 [2024-10-13 17:43:38.158029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.785 [2024-10-13 17:43:38.158594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.158815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.158829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.785 [2024-10-13 17:43:38.158839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.785 [2024-10-13 17:43:38.158982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.785 [2024-10-13 17:43:38.159137] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.785 [2024-10-13 17:43:38.159148] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.785 [2024-10-13 17:43:38.159156] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.785 [2024-10-13 17:43:38.161414] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.785 [2024-10-13 17:43:38.170407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.785 [2024-10-13 17:43:38.170899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.171233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.171245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.785 [2024-10-13 17:43:38.171254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.785 [2024-10-13 17:43:38.171415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.785 [2024-10-13 17:43:38.171522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.785 [2024-10-13 17:43:38.171531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.785 [2024-10-13 17:43:38.171538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.785 [2024-10-13 17:43:38.173732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.785 [2024-10-13 17:43:38.182838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.785 [2024-10-13 17:43:38.183271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.183585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.183596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.785 [2024-10-13 17:43:38.183604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.785 [2024-10-13 17:43:38.183764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.785 [2024-10-13 17:43:38.183889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.785 [2024-10-13 17:43:38.183898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.785 [2024-10-13 17:43:38.183905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.785 [2024-10-13 17:43:38.185890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.785 [2024-10-13 17:43:38.195220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.785 [2024-10-13 17:43:38.195761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.196130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.196146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.785 [2024-10-13 17:43:38.196155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.785 [2024-10-13 17:43:38.196299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.785 [2024-10-13 17:43:38.196518] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.785 [2024-10-13 17:43:38.196527] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.785 [2024-10-13 17:43:38.196534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.785 [2024-10-13 17:43:38.198813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.785 [2024-10-13 17:43:38.207732] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.785 [2024-10-13 17:43:38.208280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.208522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.208536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.785 [2024-10-13 17:43:38.208545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.785 [2024-10-13 17:43:38.208744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.785 [2024-10-13 17:43:38.208873] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.785 [2024-10-13 17:43:38.208881] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.785 [2024-10-13 17:43:38.208888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.785 [2024-10-13 17:43:38.211268] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.785 [2024-10-13 17:43:38.220389] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.785 [2024-10-13 17:43:38.220961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.221300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.785 [2024-10-13 17:43:38.221315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:29.785 [2024-10-13 17:43:38.221325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:29.785 [2024-10-13 17:43:38.221468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:29.785 [2024-10-13 17:43:38.221651] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.785 [2024-10-13 17:43:38.221660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.785 [2024-10-13 17:43:38.221668] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.786 [2024-10-13 17:43:38.223893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.786 [2024-10-13 17:43:38.232789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.786 [2024-10-13 17:43:38.233156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.233478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.233489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.786 [2024-10-13 17:43:38.233497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.786 [2024-10-13 17:43:38.233639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.786 [2024-10-13 17:43:38.233764] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.786 [2024-10-13 17:43:38.233774] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.786 [2024-10-13 17:43:38.233781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.786 [2024-10-13 17:43:38.236059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.786 [2024-10-13 17:43:38.245236] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.786 [2024-10-13 17:43:38.245828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.246157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.246173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.786 [2024-10-13 17:43:38.246182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.786 [2024-10-13 17:43:38.246363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.786 [2024-10-13 17:43:38.246527] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.786 [2024-10-13 17:43:38.246537] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.786 [2024-10-13 17:43:38.246545] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.786 [2024-10-13 17:43:38.248605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.786 [2024-10-13 17:43:38.257822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.786 [2024-10-13 17:43:38.258441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.258815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.258834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.786 [2024-10-13 17:43:38.258844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.786 [2024-10-13 17:43:38.259072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.786 [2024-10-13 17:43:38.259219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.786 [2024-10-13 17:43:38.259229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.786 [2024-10-13 17:43:38.259237] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.786 [2024-10-13 17:43:38.261460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.786 [2024-10-13 17:43:38.270368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.786 [2024-10-13 17:43:38.270912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.271255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.271272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.786 [2024-10-13 17:43:38.271282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.786 [2024-10-13 17:43:38.271425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.786 [2024-10-13 17:43:38.271608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.786 [2024-10-13 17:43:38.271618] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.786 [2024-10-13 17:43:38.271626] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.786 [2024-10-13 17:43:38.273951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.786 [2024-10-13 17:43:38.282999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.786 [2024-10-13 17:43:38.283576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.283914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.283928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.786 [2024-10-13 17:43:38.283938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.786 [2024-10-13 17:43:38.284145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.786 [2024-10-13 17:43:38.284292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.786 [2024-10-13 17:43:38.284302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.786 [2024-10-13 17:43:38.284309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.786 [2024-10-13 17:43:38.286550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.786 [2024-10-13 17:43:38.295438] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.786 [2024-10-13 17:43:38.296003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.296345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.786 [2024-10-13 17:43:38.296361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:29.786 [2024-10-13 17:43:38.296375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:29.786 [2024-10-13 17:43:38.296555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:29.786 [2024-10-13 17:43:38.296720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.786 [2024-10-13 17:43:38.296730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.786 [2024-10-13 17:43:38.296737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.786 [2024-10-13 17:43:38.299092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.048 [2024-10-13 17:43:38.307977] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.048 [2024-10-13 17:43:38.308457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-10-13 17:43:38.308790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-10-13 17:43:38.308801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.048 [2024-10-13 17:43:38.308809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.048 [2024-10-13 17:43:38.308970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.048 [2024-10-13 17:43:38.309083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.048 [2024-10-13 17:43:38.309092] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.048 [2024-10-13 17:43:38.309100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.048 [2024-10-13 17:43:38.311464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.048 [2024-10-13 17:43:38.320545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.048 [2024-10-13 17:43:38.321090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-10-13 17:43:38.321442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-10-13 17:43:38.321456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.048 [2024-10-13 17:43:38.321466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.048 [2024-10-13 17:43:38.321646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.048 [2024-10-13 17:43:38.321756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.048 [2024-10-13 17:43:38.321765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.048 [2024-10-13 17:43:38.321772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.048 [2024-10-13 17:43:38.324113] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.048 [2024-10-13 17:43:38.332862] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.048 [2024-10-13 17:43:38.333432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-10-13 17:43:38.333758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.048 [2024-10-13 17:43:38.333773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.048 [2024-10-13 17:43:38.333783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.048 [2024-10-13 17:43:38.333967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.048 [2024-10-13 17:43:38.334126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.049 [2024-10-13 17:43:38.334136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.049 [2024-10-13 17:43:38.334144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.049 [2024-10-13 17:43:38.336529] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.049 [2024-10-13 17:43:38.345292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.049 [2024-10-13 17:43:38.345852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.346182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.346198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.049 [2024-10-13 17:43:38.346208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.049 [2024-10-13 17:43:38.346351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.049 [2024-10-13 17:43:38.346480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.049 [2024-10-13 17:43:38.346489] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.049 [2024-10-13 17:43:38.346496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.049 [2024-10-13 17:43:38.348956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.049 [2024-10-13 17:43:38.357778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.049 [2024-10-13 17:43:38.358197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.358562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.358574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.049 [2024-10-13 17:43:38.358581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.049 [2024-10-13 17:43:38.358761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.049 [2024-10-13 17:43:38.358922] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.049 [2024-10-13 17:43:38.358932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.049 [2024-10-13 17:43:38.358939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.049 [2024-10-13 17:43:38.361164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.049 [2024-10-13 17:43:38.370138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.049 [2024-10-13 17:43:38.370556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.370869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.370880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.049 [2024-10-13 17:43:38.370887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.049 [2024-10-13 17:43:38.371029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.049 [2024-10-13 17:43:38.371186] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.049 [2024-10-13 17:43:38.371196] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.049 [2024-10-13 17:43:38.371203] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.049 [2024-10-13 17:43:38.373487] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.049 [2024-10-13 17:43:38.382703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.049 [2024-10-13 17:43:38.383307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.383658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.383672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.049 [2024-10-13 17:43:38.383683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.049 [2024-10-13 17:43:38.383863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.049 [2024-10-13 17:43:38.384027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.049 [2024-10-13 17:43:38.384036] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.049 [2024-10-13 17:43:38.384044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.049 [2024-10-13 17:43:38.386295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.049 [2024-10-13 17:43:38.395251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.049 [2024-10-13 17:43:38.395710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.395988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.396000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.049 [2024-10-13 17:43:38.396008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.049 [2024-10-13 17:43:38.396176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.049 [2024-10-13 17:43:38.396337] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.049 [2024-10-13 17:43:38.396346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.049 [2024-10-13 17:43:38.396354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.049 [2024-10-13 17:43:38.398715] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.049 [2024-10-13 17:43:38.407714] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.049 [2024-10-13 17:43:38.408110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.408450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.408462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.049 [2024-10-13 17:43:38.408470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.049 [2024-10-13 17:43:38.408562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.049 [2024-10-13 17:43:38.408723] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.049 [2024-10-13 17:43:38.408737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.049 [2024-10-13 17:43:38.408744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.049 [2024-10-13 17:43:38.410947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.049 [2024-10-13 17:43:38.420295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.049 [2024-10-13 17:43:38.421265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.421583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.421596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.049 [2024-10-13 17:43:38.421604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.049 [2024-10-13 17:43:38.421736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.049 [2024-10-13 17:43:38.421880] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.049 [2024-10-13 17:43:38.421889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.049 [2024-10-13 17:43:38.421896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.049 [2024-10-13 17:43:38.423941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.049 [2024-10-13 17:43:38.432721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.049 [2024-10-13 17:43:38.433209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.433507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.433520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.049 [2024-10-13 17:43:38.433529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.049 [2024-10-13 17:43:38.433707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.049 [2024-10-13 17:43:38.433869] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.049 [2024-10-13 17:43:38.433878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.049 [2024-10-13 17:43:38.433885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.049 [2024-10-13 17:43:38.436199] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.049 [2024-10-13 17:43:38.445612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.049 [2024-10-13 17:43:38.446152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.446447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.049 [2024-10-13 17:43:38.446458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.049 [2024-10-13 17:43:38.446466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.049 [2024-10-13 17:43:38.446592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.049 [2024-10-13 17:43:38.446734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.049 [2024-10-13 17:43:38.446743] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.050 [2024-10-13 17:43:38.446754] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.050 [2024-10-13 17:43:38.448954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.050 [2024-10-13 17:43:38.458110] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.050 [2024-10-13 17:43:38.458538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.050 [2024-10-13 17:43:38.458838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.050 [2024-10-13 17:43:38.458849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.050 [2024-10-13 17:43:38.458857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.050 [2024-10-13 17:43:38.458999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.050 [2024-10-13 17:43:38.459130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.050 [2024-10-13 17:43:38.459140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.050 [2024-10-13 17:43:38.459148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.050 [2024-10-13 17:43:38.461321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.050 [2024-10-13 17:43:38.470635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.050 [2024-10-13 17:43:38.471134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.050 [2024-10-13 17:43:38.471482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.050 [2024-10-13 17:43:38.471494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.050 [2024-10-13 17:43:38.471501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.050 [2024-10-13 17:43:38.471644] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.050 [2024-10-13 17:43:38.471768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.050 [2024-10-13 17:43:38.471776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.050 [2024-10-13 17:43:38.471783] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.050 [2024-10-13 17:43:38.473958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.050 [2024-10-13 17:43:38.483272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.050 [2024-10-13 17:43:38.483695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.050 [2024-10-13 17:43:38.484029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.050 [2024-10-13 17:43:38.484041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.050 [2024-10-13 17:43:38.484049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.050 [2024-10-13 17:43:38.484215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.050 [2024-10-13 17:43:38.484358] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.050 [2024-10-13 17:43:38.484367] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.050 [2024-10-13 17:43:38.484374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.050 [2024-10-13 17:43:38.486561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.050 [2024-10-13 17:43:38.495661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.050 [2024-10-13 17:43:38.496074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.050 [2024-10-13 17:43:38.496381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.050 [2024-10-13 17:43:38.496392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.050 [2024-10-13 17:43:38.496400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.050 [2024-10-13 17:43:38.496506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.050 [2024-10-13 17:43:38.496630] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.050 [2024-10-13 17:43:38.496638] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.050 [2024-10-13 17:43:38.496646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.050 [2024-10-13 17:43:38.498809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.050 [2024-10-13 17:43:38.508023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.050 [2024-10-13 17:43:38.508492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.050 [2024-10-13 17:43:38.508823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.050 [2024-10-13 17:43:38.508835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.050 [2024-10-13 17:43:38.508842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.050 [2024-10-13 17:43:38.508984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.050 [2024-10-13 17:43:38.509135] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.050 [2024-10-13 17:43:38.509144] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.050 [2024-10-13 17:43:38.509151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.050 [2024-10-13 17:43:38.511353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.050 [2024-10-13 17:43:38.520573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.050 [2024-10-13 17:43:38.521076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-10-13 17:43:38.521379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-10-13 17:43:38.521392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.050 [2024-10-13 17:43:38.521400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.050 [2024-10-13 17:43:38.521560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.050 [2024-10-13 17:43:38.521703] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.050 [2024-10-13 17:43:38.521712] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.050 [2024-10-13 17:43:38.521719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.050 [2024-10-13 17:43:38.523991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.050 [2024-10-13 17:43:38.533193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.050 [2024-10-13 17:43:38.533698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-10-13 17:43:38.534030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-10-13 17:43:38.534041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.050 [2024-10-13 17:43:38.534049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.050 [2024-10-13 17:43:38.534180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.050 [2024-10-13 17:43:38.534287] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.050 [2024-10-13 17:43:38.534297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.050 [2024-10-13 17:43:38.534305] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.050 [2024-10-13 17:43:38.536702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.050 [2024-10-13 17:43:38.545692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.050 [2024-10-13 17:43:38.546180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-10-13 17:43:38.546502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-10-13 17:43:38.546513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.050 [2024-10-13 17:43:38.546521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.050 [2024-10-13 17:43:38.546644] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.050 [2024-10-13 17:43:38.546768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.050 [2024-10-13 17:43:38.546776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.050 [2024-10-13 17:43:38.546783] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.050 [2024-10-13 17:43:38.548966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.050 [2024-10-13 17:43:38.558273] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.050 [2024-10-13 17:43:38.558654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-10-13 17:43:38.558982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.050 [2024-10-13 17:43:38.558994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.050 [2024-10-13 17:43:38.559002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.050 [2024-10-13 17:43:38.559149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.050 [2024-10-13 17:43:38.559293] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.050 [2024-10-13 17:43:38.559301] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.050 [2024-10-13 17:43:38.559308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.050 [2024-10-13 17:43:38.561327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.051 [2024-10-13 17:43:38.570780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.051 [2024-10-13 17:43:38.571264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-13 17:43:38.571595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-13 17:43:38.571607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.313 [2024-10-13 17:43:38.571615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.313 [2024-10-13 17:43:38.571740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.313 [2024-10-13 17:43:38.571865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.313 [2024-10-13 17:43:38.571874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.313 [2024-10-13 17:43:38.571882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.313 [2024-10-13 17:43:38.574132] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.313 [2024-10-13 17:43:38.583219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.313 [2024-10-13 17:43:38.583607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-13 17:43:38.583914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-13 17:43:38.583925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.313 [2024-10-13 17:43:38.583933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.313 [2024-10-13 17:43:38.584116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.313 [2024-10-13 17:43:38.584260] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.313 [2024-10-13 17:43:38.584268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.313 [2024-10-13 17:43:38.584275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.313 [2024-10-13 17:43:38.586564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.313 [2024-10-13 17:43:38.595696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.313 [2024-10-13 17:43:38.596112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-13 17:43:38.596395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-13 17:43:38.596407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.313 [2024-10-13 17:43:38.596414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.313 [2024-10-13 17:43:38.596520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.313 [2024-10-13 17:43:38.596627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.313 [2024-10-13 17:43:38.596636] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.313 [2024-10-13 17:43:38.596644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.313 [2024-10-13 17:43:38.598828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.313 [2024-10-13 17:43:38.608410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.313 [2024-10-13 17:43:38.608808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-13 17:43:38.609107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-13 17:43:38.609123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.313 [2024-10-13 17:43:38.609131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.313 [2024-10-13 17:43:38.609310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.313 [2024-10-13 17:43:38.609506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.313 [2024-10-13 17:43:38.609515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.313 [2024-10-13 17:43:38.609522] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.313 [2024-10-13 17:43:38.611669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.313 [2024-10-13 17:43:38.620839] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.313 [2024-10-13 17:43:38.621301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-13 17:43:38.621581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-13 17:43:38.621592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.313 [2024-10-13 17:43:38.621599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.313 [2024-10-13 17:43:38.621687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.313 [2024-10-13 17:43:38.621810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.313 [2024-10-13 17:43:38.621819] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.313 [2024-10-13 17:43:38.621826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.313 [2024-10-13 17:43:38.624140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3413414 Killed "${NVMF_APP[@]}" "$@" 00:33:30.313 17:43:38 -- host/bdevperf.sh@36 -- # tgt_init 00:33:30.313 17:43:38 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:30.313 17:43:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:30.313 17:43:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:30.313 17:43:38 -- common/autotest_common.sh@10 -- # set +x 00:33:30.313 [2024-10-13 17:43:38.633487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.314 [2024-10-13 17:43:38.634084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.634370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.634386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.314 [2024-10-13 17:43:38.634396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.314 [2024-10-13 17:43:38.634595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.314 [2024-10-13 17:43:38.634743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.314 [2024-10-13 17:43:38.634752] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.314 [2024-10-13 17:43:38.634760] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.314 [2024-10-13 17:43:38.636842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.314 17:43:38 -- nvmf/common.sh@469 -- # nvmfpid=3414978 00:33:30.314 17:43:38 -- nvmf/common.sh@470 -- # waitforlisten 3414978 00:33:30.314 17:43:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:30.314 17:43:38 -- common/autotest_common.sh@819 -- # '[' -z 3414978 ']' 00:33:30.314 17:43:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.314 17:43:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:30.314 17:43:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.314 17:43:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:30.314 17:43:38 -- common/autotest_common.sh@10 -- # set +x 00:33:30.314 [2024-10-13 17:43:38.645923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.314 [2024-10-13 17:43:38.646417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.646753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.646765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.314 [2024-10-13 17:43:38.646773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.314 [2024-10-13 17:43:38.646880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.314 [2024-10-13 17:43:38.647060] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.314 [2024-10-13 17:43:38.647076] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.314 [2024-10-13 17:43:38.647084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.314 [2024-10-13 17:43:38.649411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.314 [2024-10-13 17:43:38.658354] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.314 [2024-10-13 17:43:38.658723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.659028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.659039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.314 [2024-10-13 17:43:38.659046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.314 [2024-10-13 17:43:38.659193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.314 [2024-10-13 17:43:38.659337] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.314 [2024-10-13 17:43:38.659346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.314 [2024-10-13 17:43:38.659354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.314 [2024-10-13 17:43:38.661713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.314 [2024-10-13 17:43:38.670775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.314 [2024-10-13 17:43:38.671243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.671593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.671604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.314 [2024-10-13 17:43:38.671612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.314 [2024-10-13 17:43:38.671776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.314 [2024-10-13 17:43:38.671883] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.314 [2024-10-13 17:43:38.671892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.314 [2024-10-13 17:43:38.671901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.314 [2024-10-13 17:43:38.674336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.314 [2024-10-13 17:43:38.683041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.314 [2024-10-13 17:43:38.683512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.683807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.683819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.314 [2024-10-13 17:43:38.683826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.314 [2024-10-13 17:43:38.683950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.314 [2024-10-13 17:43:38.684117] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.314 [2024-10-13 17:43:38.684126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.314 [2024-10-13 17:43:38.684134] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.314 [2024-10-13 17:43:38.686264] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.314 [2024-10-13 17:43:38.687976] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:33:30.314 [2024-10-13 17:43:38.688021] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.314 [2024-10-13 17:43:38.695542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.314 [2024-10-13 17:43:38.695993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.696175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.696187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.314 [2024-10-13 17:43:38.696195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.314 [2024-10-13 17:43:38.696320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.314 [2024-10-13 17:43:38.696445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.314 [2024-10-13 17:43:38.696453] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.314 [2024-10-13 17:43:38.696460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.314 [2024-10-13 17:43:38.698588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.314 [2024-10-13 17:43:38.707952] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.314 [2024-10-13 17:43:38.708422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.708708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.708719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.314 [2024-10-13 17:43:38.708730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.314 [2024-10-13 17:43:38.708855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.314 [2024-10-13 17:43:38.709015] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.314 [2024-10-13 17:43:38.709023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.314 [2024-10-13 17:43:38.709030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.314 [2024-10-13 17:43:38.711199] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.314 [2024-10-13 17:43:38.720595] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.314 [2024-10-13 17:43:38.721070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.721266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.721278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.314 [2024-10-13 17:43:38.721286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.314 [2024-10-13 17:43:38.721410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.314 [2024-10-13 17:43:38.721553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.314 [2024-10-13 17:43:38.721562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.314 [2024-10-13 17:43:38.721569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.314 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.314 [2024-10-13 17:43:38.723736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.314 [2024-10-13 17:43:38.733144] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.314 [2024-10-13 17:43:38.733747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.734081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-13 17:43:38.734096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.314 [2024-10-13 17:43:38.734106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.314 [2024-10-13 17:43:38.734288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.314 [2024-10-13 17:43:38.734416] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.314 [2024-10-13 17:43:38.734425] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.314 [2024-10-13 17:43:38.734433] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.315 [2024-10-13 17:43:38.736875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.315 [2024-10-13 17:43:38.745645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.315 [2024-10-13 17:43:38.746098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.746446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.746458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.315 [2024-10-13 17:43:38.746470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.315 [2024-10-13 17:43:38.746613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.315 [2024-10-13 17:43:38.746702] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.315 [2024-10-13 17:43:38.746711] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.315 [2024-10-13 17:43:38.746718] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.315 [2024-10-13 17:43:38.748791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.315 [2024-10-13 17:43:38.758089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.315 [2024-10-13 17:43:38.758449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.758782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.758793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.315 [2024-10-13 17:43:38.758801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.315 [2024-10-13 17:43:38.758925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.315 [2024-10-13 17:43:38.759050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.315 [2024-10-13 17:43:38.759059] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.315 [2024-10-13 17:43:38.759071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.315 [2024-10-13 17:43:38.761414] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.315 [2024-10-13 17:43:38.770481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.315 [2024-10-13 17:43:38.770924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.771255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.771267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.315 [2024-10-13 17:43:38.771275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.315 [2024-10-13 17:43:38.771399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.315 [2024-10-13 17:43:38.771487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.315 [2024-10-13 17:43:38.771495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.315 [2024-10-13 17:43:38.771503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.315 [2024-10-13 17:43:38.772238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:30.315 [2024-10-13 17:43:38.773774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.315 [2024-10-13 17:43:38.783178] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.315 [2024-10-13 17:43:38.783644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.783964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.783975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.315 [2024-10-13 17:43:38.783983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.315 [2024-10-13 17:43:38.784191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.315 [2024-10-13 17:43:38.784336] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.315 [2024-10-13 17:43:38.784344] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.315 [2024-10-13 17:43:38.784352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.315 [2024-10-13 17:43:38.786351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.315 [2024-10-13 17:43:38.795538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.315 [2024-10-13 17:43:38.795912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.796107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.796119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.315 [2024-10-13 17:43:38.796128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.315 [2024-10-13 17:43:38.796253] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.315 [2024-10-13 17:43:38.796379] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.315 [2024-10-13 17:43:38.796388] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.315 [2024-10-13 17:43:38.796396] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.315 [2024-10-13 17:43:38.798829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.315 [2024-10-13 17:43:38.799486] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:30.315 [2024-10-13 17:43:38.799572] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.315 [2024-10-13 17:43:38.799578] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.315 [2024-10-13 17:43:38.799583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:30.315 [2024-10-13 17:43:38.799683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:30.315 [2024-10-13 17:43:38.799841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.315 [2024-10-13 17:43:38.799843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:30.315 [2024-10-13 17:43:38.808121] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.315 [2024-10-13 17:43:38.808772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.809020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.809035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.315 [2024-10-13 17:43:38.809045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.315 [2024-10-13 17:43:38.809224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.315 [2024-10-13 17:43:38.809374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.315 [2024-10-13 17:43:38.809384] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.315 [2024-10-13 17:43:38.809392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.315 [2024-10-13 17:43:38.811651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.315 [2024-10-13 17:43:38.820651] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.315 [2024-10-13 17:43:38.821099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.821284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.821294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.315 [2024-10-13 17:43:38.821303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.315 [2024-10-13 17:43:38.821465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.315 [2024-10-13 17:43:38.821590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.315 [2024-10-13 17:43:38.821600] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.315 [2024-10-13 17:43:38.821607] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.315 [2024-10-13 17:43:38.823772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.315 [2024-10-13 17:43:38.833187] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.315 [2024-10-13 17:43:38.833546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.833629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-13 17:43:38.833638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.315 [2024-10-13 17:43:38.833646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.315 [2024-10-13 17:43:38.833807] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.315 [2024-10-13 17:43:38.833932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.315 [2024-10-13 17:43:38.833940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.315 [2024-10-13 17:43:38.833948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.578 [2024-10-13 17:43:38.836151] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.578 [2024-10-13 17:43:38.845667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.578 [2024-10-13 17:43:38.846168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.578 [2024-10-13 17:43:38.846369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.578 [2024-10-13 17:43:38.846380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.578 [2024-10-13 17:43:38.846388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.578 [2024-10-13 17:43:38.846494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.578 [2024-10-13 17:43:38.846620] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.578 [2024-10-13 17:43:38.846630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.578 [2024-10-13 17:43:38.846638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.578 [2024-10-13 17:43:38.848835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.578 [2024-10-13 17:43:38.858259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.578 [2024-10-13 17:43:38.858623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.578 [2024-10-13 17:43:38.858944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.578 [2024-10-13 17:43:38.858955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.578 [2024-10-13 17:43:38.858963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.578 [2024-10-13 17:43:38.859110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.578 [2024-10-13 17:43:38.859254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.578 [2024-10-13 17:43:38.859262] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.578 [2024-10-13 17:43:38.859269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.578 [2024-10-13 17:43:38.861521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.578 [2024-10-13 17:43:38.870493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.578 [2024-10-13 17:43:38.870933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.578 [2024-10-13 17:43:38.871104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.871119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.579 [2024-10-13 17:43:38.871130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.579 [2024-10-13 17:43:38.871316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.579 [2024-10-13 17:43:38.871445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.579 [2024-10-13 17:43:38.871456] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.579 [2024-10-13 17:43:38.871463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.579 [2024-10-13 17:43:38.873612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.579 [2024-10-13 17:43:38.882859] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.579 [2024-10-13 17:43:38.883216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.883430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.883444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.579 [2024-10-13 17:43:38.883454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.579 [2024-10-13 17:43:38.883618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.579 [2024-10-13 17:43:38.883783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.579 [2024-10-13 17:43:38.883793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.579 [2024-10-13 17:43:38.883801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.579 [2024-10-13 17:43:38.886159] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.579 [2024-10-13 17:43:38.895303] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.579 [2024-10-13 17:43:38.895660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.895987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.895999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.579 [2024-10-13 17:43:38.896007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.579 [2024-10-13 17:43:38.896155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.579 [2024-10-13 17:43:38.896280] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.579 [2024-10-13 17:43:38.896289] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.579 [2024-10-13 17:43:38.896297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.579 [2024-10-13 17:43:38.898552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.579 [2024-10-13 17:43:38.907692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.579 [2024-10-13 17:43:38.908067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.908247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.908260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.579 [2024-10-13 17:43:38.908268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.579 [2024-10-13 17:43:38.908356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.579 [2024-10-13 17:43:38.908518] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.579 [2024-10-13 17:43:38.908527] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.579 [2024-10-13 17:43:38.908535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.579 [2024-10-13 17:43:38.910790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.579 [2024-10-13 17:43:38.920295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.579 [2024-10-13 17:43:38.920888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.921257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.921275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.579 [2024-10-13 17:43:38.921285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.579 [2024-10-13 17:43:38.921447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.579 [2024-10-13 17:43:38.921594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.579 [2024-10-13 17:43:38.921604] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.579 [2024-10-13 17:43:38.921612] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.579 [2024-10-13 17:43:38.923963] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.579 [2024-10-13 17:43:38.932924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.579 [2024-10-13 17:43:38.933413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.933751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.933770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.579 [2024-10-13 17:43:38.933778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.579 [2024-10-13 17:43:38.933959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.579 [2024-10-13 17:43:38.934107] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.579 [2024-10-13 17:43:38.934118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.579 [2024-10-13 17:43:38.934125] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.579 [2024-10-13 17:43:38.936325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.579 [2024-10-13 17:43:38.945318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.579 [2024-10-13 17:43:38.945832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.946147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.946160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.579 [2024-10-13 17:43:38.946167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.579 [2024-10-13 17:43:38.946328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.579 [2024-10-13 17:43:38.946434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.579 [2024-10-13 17:43:38.946442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.579 [2024-10-13 17:43:38.946449] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.579 [2024-10-13 17:43:38.948594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.579 [2024-10-13 17:43:38.957574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.579 [2024-10-13 17:43:38.958014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.958279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.958295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.579 [2024-10-13 17:43:38.958306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.579 [2024-10-13 17:43:38.958468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.579 [2024-10-13 17:43:38.958615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.579 [2024-10-13 17:43:38.958624] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.579 [2024-10-13 17:43:38.958632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.579 [2024-10-13 17:43:38.960733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.579 [2024-10-13 17:43:38.970044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.579 [2024-10-13 17:43:38.970487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.970849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.970860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.579 [2024-10-13 17:43:38.970876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.579 [2024-10-13 17:43:38.970983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.579 [2024-10-13 17:43:38.971130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.579 [2024-10-13 17:43:38.971140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.579 [2024-10-13 17:43:38.971147] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.579 [2024-10-13 17:43:38.973211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.579 [2024-10-13 17:43:38.982597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.579 [2024-10-13 17:43:38.982903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.983075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.579 [2024-10-13 17:43:38.983086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.579 [2024-10-13 17:43:38.983094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.579 [2024-10-13 17:43:38.983219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.579 [2024-10-13 17:43:38.983361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.579 [2024-10-13 17:43:38.983370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.579 [2024-10-13 17:43:38.983378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.579 [2024-10-13 17:43:38.985521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.580 [2024-10-13 17:43:38.995251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.580 [2024-10-13 17:43:38.995627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.580 [2024-10-13 17:43:38.995793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.580 [2024-10-13 17:43:38.995803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.580 [2024-10-13 17:43:38.995811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.580 [2024-10-13 17:43:38.995880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.580 [2024-10-13 17:43:38.996041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.580 [2024-10-13 17:43:38.996050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.580 [2024-10-13 17:43:38.996057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.580 [2024-10-13 17:43:38.998294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.580 [2024-10-13 17:43:39.007675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.580 [2024-10-13 17:43:39.008131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.580 [2024-10-13 17:43:39.008311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.580 [2024-10-13 17:43:39.008325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.580 [2024-10-13 17:43:39.008333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.580 [2024-10-13 17:43:39.008462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.580 [2024-10-13 17:43:39.008605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.580 [2024-10-13 17:43:39.008615] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.580 [2024-10-13 17:43:39.008622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.580 [2024-10-13 17:43:39.010857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.580 [2024-10-13 17:43:39.020179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.580 [2024-10-13 17:43:39.020504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.580 [2024-10-13 17:43:39.020704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.580 [2024-10-13 17:43:39.020715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.580 [2024-10-13 17:43:39.020722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.580 [2024-10-13 17:43:39.020864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.580 [2024-10-13 17:43:39.021044] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.580 [2024-10-13 17:43:39.021052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.580 [2024-10-13 17:43:39.021060] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.580 [2024-10-13 17:43:39.023280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.580 [2024-10-13 17:43:39.032545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.580 [2024-10-13 17:43:39.032997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.580 [2024-10-13 17:43:39.033251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.580 [2024-10-13 17:43:39.033266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.580 [2024-10-13 17:43:39.033276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.580 [2024-10-13 17:43:39.033420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.580 [2024-10-13 17:43:39.033530] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.580 [2024-10-13 17:43:39.033540] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.580 [2024-10-13 17:43:39.033547] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.580 [2024-10-13 17:43:39.035953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.580 [2024-10-13 17:43:39.044772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.580 [2024-10-13 17:43:39.045269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.580 [2024-10-13 17:43:39.045576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.580 [2024-10-13 17:43:39.045587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.580 [2024-10-13 17:43:39.045595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.580 [2024-10-13 17:43:39.045756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.580 [2024-10-13 17:43:39.045922] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.580 [2024-10-13 17:43:39.045930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.580 [2024-10-13 17:43:39.045938] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.580 [2024-10-13 17:43:39.048251] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.580 [2024-10-13 17:43:39.057220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.580 [2024-10-13 17:43:39.057790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.580 [2024-10-13 17:43:39.058176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.580 [2024-10-13 17:43:39.058193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.580 [2024-10-13 17:43:39.058203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.580 [2024-10-13 17:43:39.058383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.580 [2024-10-13 17:43:39.058512] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.580 [2024-10-13 17:43:39.058521] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.580 [2024-10-13 17:43:39.058529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.580 [2024-10-13 17:43:39.060937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.580 [2024-10-13 17:43:39.069761] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.580 [2024-10-13 17:43:39.070220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.580 [2024-10-13 17:43:39.070545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.580 [2024-10-13 17:43:39.070556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.580 [2024-10-13 17:43:39.070564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.580 [2024-10-13 17:43:39.070671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.580 [2024-10-13 17:43:39.070814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.580 [2024-10-13 17:43:39.070823] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.580 [2024-10-13 17:43:39.070830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.580 [2024-10-13 17:43:39.073244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.580 [2024-10-13 17:43:39.082248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.580 [2024-10-13 17:43:39.082701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.580 [2024-10-13 17:43:39.082883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.580 [2024-10-13 17:43:39.082894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.580 [2024-10-13 17:43:39.082901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.580 [2024-10-13 17:43:39.083025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.580 [2024-10-13 17:43:39.083156] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.580 [2024-10-13 17:43:39.083171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.580 [2024-10-13 17:43:39.083179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.580 [2024-10-13 17:43:39.085578] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.580 [2024-10-13 17:43:39.094790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.580 [2024-10-13 17:43:39.095337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.580 [2024-10-13 17:43:39.095685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.580 [2024-10-13 17:43:39.095700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.580 [2024-10-13 17:43:39.095710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.580 [2024-10-13 17:43:39.095891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.580 [2024-10-13 17:43:39.096019] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.580 [2024-10-13 17:43:39.096029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.580 [2024-10-13 17:43:39.096036] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.580 [2024-10-13 17:43:39.098209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.843 [2024-10-13 17:43:39.107253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.843 [2024-10-13 17:43:39.107762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.108093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.108107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.843 [2024-10-13 17:43:39.108115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.843 [2024-10-13 17:43:39.108223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.843 [2024-10-13 17:43:39.108383] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.843 [2024-10-13 17:43:39.108392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.843 [2024-10-13 17:43:39.108400] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.843 [2024-10-13 17:43:39.110833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.843 [2024-10-13 17:43:39.119692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.843 [2024-10-13 17:43:39.120194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.120498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.120510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.843 [2024-10-13 17:43:39.120518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.843 [2024-10-13 17:43:39.120679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.843 [2024-10-13 17:43:39.120822] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.843 [2024-10-13 17:43:39.120830] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.843 [2024-10-13 17:43:39.120842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.843 [2024-10-13 17:43:39.123058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.843 [2024-10-13 17:43:39.132208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.843 [2024-10-13 17:43:39.132616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.132957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.132971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.843 [2024-10-13 17:43:39.132981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.843 [2024-10-13 17:43:39.133170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.843 [2024-10-13 17:43:39.133263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.843 [2024-10-13 17:43:39.133271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.843 [2024-10-13 17:43:39.133280] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.843 [2024-10-13 17:43:39.135575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.843 [2024-10-13 17:43:39.144701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.843 [2024-10-13 17:43:39.145367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.145741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.145756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.843 [2024-10-13 17:43:39.145765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.843 [2024-10-13 17:43:39.145927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.843 [2024-10-13 17:43:39.146119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.843 [2024-10-13 17:43:39.146129] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.843 [2024-10-13 17:43:39.146137] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.843 [2024-10-13 17:43:39.148557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.843 [2024-10-13 17:43:39.157421] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.843 [2024-10-13 17:43:39.157845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.158188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.158200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.843 [2024-10-13 17:43:39.158208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.843 [2024-10-13 17:43:39.158351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.843 [2024-10-13 17:43:39.158494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.843 [2024-10-13 17:43:39.158503] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.843 [2024-10-13 17:43:39.158511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.843 [2024-10-13 17:43:39.160481] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.843 [2024-10-13 17:43:39.170012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.843 [2024-10-13 17:43:39.170529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.170872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.170883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.843 [2024-10-13 17:43:39.170891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.843 [2024-10-13 17:43:39.170997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.843 [2024-10-13 17:43:39.171126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.843 [2024-10-13 17:43:39.171134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.843 [2024-10-13 17:43:39.171141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.843 [2024-10-13 17:43:39.173438] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.843 [2024-10-13 17:43:39.182516] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.843 [2024-10-13 17:43:39.182948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.183274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.183286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.843 [2024-10-13 17:43:39.183294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.843 [2024-10-13 17:43:39.183474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.843 [2024-10-13 17:43:39.183598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.843 [2024-10-13 17:43:39.183607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.843 [2024-10-13 17:43:39.183614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.843 [2024-10-13 17:43:39.185812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.843 [2024-10-13 17:43:39.194912] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.843 [2024-10-13 17:43:39.195383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.195617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.195631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.843 [2024-10-13 17:43:39.195640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.843 [2024-10-13 17:43:39.195803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.843 [2024-10-13 17:43:39.195969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.843 [2024-10-13 17:43:39.195979] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.843 [2024-10-13 17:43:39.195986] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.843 [2024-10-13 17:43:39.198396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.843 [2024-10-13 17:43:39.207266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.843 [2024-10-13 17:43:39.207777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.208176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.208192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.843 [2024-10-13 17:43:39.208202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.843 [2024-10-13 17:43:39.208363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.843 [2024-10-13 17:43:39.208473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.843 [2024-10-13 17:43:39.208482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.843 [2024-10-13 17:43:39.208490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.843 [2024-10-13 17:43:39.211057] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.843 [2024-10-13 17:43:39.219844] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.843 [2024-10-13 17:43:39.220423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.220761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.843 [2024-10-13 17:43:39.220776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.843 [2024-10-13 17:43:39.220786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.844 [2024-10-13 17:43:39.220965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.844 [2024-10-13 17:43:39.221121] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.844 [2024-10-13 17:43:39.221131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.844 [2024-10-13 17:43:39.221138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.844 [2024-10-13 17:43:39.223235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.844 [2024-10-13 17:43:39.232442] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.844 [2024-10-13 17:43:39.233105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.233305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.233320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.844 [2024-10-13 17:43:39.233330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.844 [2024-10-13 17:43:39.233510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.844 [2024-10-13 17:43:39.233639] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.844 [2024-10-13 17:43:39.233649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.844 [2024-10-13 17:43:39.233656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.844 [2024-10-13 17:43:39.235901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.844 [2024-10-13 17:43:39.245070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.844 [2024-10-13 17:43:39.245674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.246022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.246037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.844 [2024-10-13 17:43:39.246047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.844 [2024-10-13 17:43:39.246197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.844 [2024-10-13 17:43:39.246309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.844 [2024-10-13 17:43:39.246318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.844 [2024-10-13 17:43:39.246327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.844 [2024-10-13 17:43:39.248530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.844 [2024-10-13 17:43:39.257387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.844 [2024-10-13 17:43:39.257670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.258034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.258045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.844 [2024-10-13 17:43:39.258053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.844 [2024-10-13 17:43:39.258237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.844 [2024-10-13 17:43:39.258398] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.844 [2024-10-13 17:43:39.258407] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.844 [2024-10-13 17:43:39.258414] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.844 [2024-10-13 17:43:39.260847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.844 [2024-10-13 17:43:39.269775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.844 [2024-10-13 17:43:39.270362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.270733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.270748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.844 [2024-10-13 17:43:39.270758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.844 [2024-10-13 17:43:39.270901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.844 [2024-10-13 17:43:39.271078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.844 [2024-10-13 17:43:39.271088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.844 [2024-10-13 17:43:39.271096] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.844 [2024-10-13 17:43:39.273523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.844 [2024-10-13 17:43:39.282225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.844 [2024-10-13 17:43:39.282718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.283039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.283055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.844 [2024-10-13 17:43:39.283068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.844 [2024-10-13 17:43:39.283212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.844 [2024-10-13 17:43:39.283318] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.844 [2024-10-13 17:43:39.283327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.844 [2024-10-13 17:43:39.283334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.844 [2024-10-13 17:43:39.285679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.844 [2024-10-13 17:43:39.294574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.844 [2024-10-13 17:43:39.295153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.295553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.295568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.844 [2024-10-13 17:43:39.295578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.844 [2024-10-13 17:43:39.295685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.844 [2024-10-13 17:43:39.295849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.844 [2024-10-13 17:43:39.295858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.844 [2024-10-13 17:43:39.295866] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.844 [2024-10-13 17:43:39.298133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.844 [2024-10-13 17:43:39.306909] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.844 [2024-10-13 17:43:39.307315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.307649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.307661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.844 [2024-10-13 17:43:39.307669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.844 [2024-10-13 17:43:39.307793] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.844 [2024-10-13 17:43:39.307972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.844 [2024-10-13 17:43:39.307982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.844 [2024-10-13 17:43:39.307990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.844 [2024-10-13 17:43:39.310231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.844 [2024-10-13 17:43:39.319399] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.844 [2024-10-13 17:43:39.319636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.319960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.319972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.844 [2024-10-13 17:43:39.319984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.844 [2024-10-13 17:43:39.320151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.844 [2024-10-13 17:43:39.320313] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.844 [2024-10-13 17:43:39.320322] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.844 [2024-10-13 17:43:39.320330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.844 [2024-10-13 17:43:39.322382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.844 [2024-10-13 17:43:39.331942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:30.844 [2024-10-13 17:43:39.332442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.332643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.844 [2024-10-13 17:43:39.332654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420
00:33:30.845 [2024-10-13 17:43:39.332661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set
00:33:30.845 [2024-10-13 17:43:39.332821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor
00:33:30.845 [2024-10-13 17:43:39.333000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:30.845 [2024-10-13 17:43:39.333009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:30.845 [2024-10-13 17:43:39.333016] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:30.845 [2024-10-13 17:43:39.335406] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:30.845 [2024-10-13 17:43:39.344173] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.845 [2024-10-13 17:43:39.344615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.845 [2024-10-13 17:43:39.344972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.845 [2024-10-13 17:43:39.344984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.845 [2024-10-13 17:43:39.344991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.845 [2024-10-13 17:43:39.345177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.845 [2024-10-13 17:43:39.345357] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.845 [2024-10-13 17:43:39.345366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.845 [2024-10-13 17:43:39.345374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.845 [2024-10-13 17:43:39.347788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.845 [2024-10-13 17:43:39.356857] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.845 [2024-10-13 17:43:39.357425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.845 [2024-10-13 17:43:39.357775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.845 [2024-10-13 17:43:39.357790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:30.845 [2024-10-13 17:43:39.357799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:30.845 [2024-10-13 17:43:39.357929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:30.845 [2024-10-13 17:43:39.358083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.845 [2024-10-13 17:43:39.358093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.845 [2024-10-13 17:43:39.358101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.845 [2024-10-13 17:43:39.360286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.106 [2024-10-13 17:43:39.369245] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.106 [2024-10-13 17:43:39.369893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.106 [2024-10-13 17:43:39.370149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.106 [2024-10-13 17:43:39.370165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.106 [2024-10-13 17:43:39.370175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.107 [2024-10-13 17:43:39.370301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.107 [2024-10-13 17:43:39.370466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.107 [2024-10-13 17:43:39.370476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.107 [2024-10-13 17:43:39.370484] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.107 [2024-10-13 17:43:39.372680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.107 [2024-10-13 17:43:39.381741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.107 [2024-10-13 17:43:39.382204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.382533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.382545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.107 [2024-10-13 17:43:39.382553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.107 [2024-10-13 17:43:39.382677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.107 [2024-10-13 17:43:39.382803] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.107 [2024-10-13 17:43:39.382812] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.107 [2024-10-13 17:43:39.382819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.107 [2024-10-13 17:43:39.384907] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.107 [2024-10-13 17:43:39.394234] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.107 [2024-10-13 17:43:39.394811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.395188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.395205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.107 [2024-10-13 17:43:39.395215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.107 [2024-10-13 17:43:39.395359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.107 [2024-10-13 17:43:39.395491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.107 [2024-10-13 17:43:39.395501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.107 [2024-10-13 17:43:39.395508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.107 [2024-10-13 17:43:39.397734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.107 [2024-10-13 17:43:39.406905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.107 [2024-10-13 17:43:39.407482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.407842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.407857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.107 [2024-10-13 17:43:39.407867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.107 [2024-10-13 17:43:39.408011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.107 [2024-10-13 17:43:39.408146] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.107 [2024-10-13 17:43:39.408156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.107 [2024-10-13 17:43:39.408164] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.107 [2024-10-13 17:43:39.410677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.107 [2024-10-13 17:43:39.419429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.107 [2024-10-13 17:43:39.420042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.420391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.420406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.107 [2024-10-13 17:43:39.420416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.107 [2024-10-13 17:43:39.420559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.107 [2024-10-13 17:43:39.420705] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.107 [2024-10-13 17:43:39.420714] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.107 [2024-10-13 17:43:39.420722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.107 [2024-10-13 17:43:39.422982] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.107 [2024-10-13 17:43:39.431978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.107 [2024-10-13 17:43:39.432522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.432745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.432760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.107 [2024-10-13 17:43:39.432770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.107 [2024-10-13 17:43:39.432913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.107 [2024-10-13 17:43:39.433005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.107 [2024-10-13 17:43:39.433021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.107 [2024-10-13 17:43:39.433029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.107 [2024-10-13 17:43:39.435168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.107 [2024-10-13 17:43:39.444770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.107 [2024-10-13 17:43:39.445273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.445634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.445645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.107 [2024-10-13 17:43:39.445653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.107 [2024-10-13 17:43:39.445760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.107 [2024-10-13 17:43:39.445866] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.107 [2024-10-13 17:43:39.445874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.107 [2024-10-13 17:43:39.445882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.107 [2024-10-13 17:43:39.447916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.107 17:43:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:31.107 17:43:39 -- common/autotest_common.sh@852 -- # return 0 00:33:31.107 17:43:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:31.107 [2024-10-13 17:43:39.457106] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.107 17:43:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:31.107 17:43:39 -- common/autotest_common.sh@10 -- # set +x 00:33:31.107 [2024-10-13 17:43:39.457549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.457746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.457757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.107 [2024-10-13 17:43:39.457765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.107 [2024-10-13 17:43:39.457925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.107 [2024-10-13 17:43:39.458050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.107 [2024-10-13 17:43:39.458059] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.107 [2024-10-13 17:43:39.458072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.107 [2024-10-13 17:43:39.460215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.107 [2024-10-13 17:43:39.469395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.107 [2024-10-13 17:43:39.469881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.470213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.107 [2024-10-13 17:43:39.470225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.107 [2024-10-13 17:43:39.470233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.107 [2024-10-13 17:43:39.470338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.107 [2024-10-13 17:43:39.470521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.107 [2024-10-13 17:43:39.470533] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.107 [2024-10-13 17:43:39.470544] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.107 [2024-10-13 17:43:39.472627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.107 [2024-10-13 17:43:39.481931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.108 [2024-10-13 17:43:39.482346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.482690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.482702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.108 [2024-10-13 17:43:39.482710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.108 [2024-10-13 17:43:39.482816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.108 [2024-10-13 17:43:39.482940] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.108 [2024-10-13 17:43:39.482950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.108 [2024-10-13 17:43:39.482957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.108 [2024-10-13 17:43:39.485268] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.108 [2024-10-13 17:43:39.494471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.108 [2024-10-13 17:43:39.494870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.495202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.495214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.108 [2024-10-13 17:43:39.495222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.108 [2024-10-13 17:43:39.495346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.108 [2024-10-13 17:43:39.495487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.108 [2024-10-13 17:43:39.495495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.108 [2024-10-13 17:43:39.495503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.108 17:43:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.108 [2024-10-13 17:43:39.497682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.108 17:43:39 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:31.108 17:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:31.108 17:43:39 -- common/autotest_common.sh@10 -- # set +x 00:33:31.108 [2024-10-13 17:43:39.504963] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.108 [2024-10-13 17:43:39.506868] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.108 [2024-10-13 17:43:39.507364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.507749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.507763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.108 [2024-10-13 17:43:39.507777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.108 [2024-10-13 17:43:39.507976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.108 [2024-10-13 17:43:39.508111] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.108 [2024-10-13 17:43:39.508121] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.108 [2024-10-13 17:43:39.508128] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.108 17:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:31.108 17:43:39 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:31.108 [2024-10-13 17:43:39.510457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.108 17:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:31.108 17:43:39 -- common/autotest_common.sh@10 -- # set +x 00:33:31.108 [2024-10-13 17:43:39.519468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.108 [2024-10-13 17:43:39.520060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.520266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.520280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.108 [2024-10-13 17:43:39.520290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.108 [2024-10-13 17:43:39.520415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.108 [2024-10-13 17:43:39.520544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.108 [2024-10-13 17:43:39.520555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.108 [2024-10-13 17:43:39.520562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.108 [2024-10-13 17:43:39.522895] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.108 [2024-10-13 17:43:39.531990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.108 [2024-10-13 17:43:39.532416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.532743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.532755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.108 [2024-10-13 17:43:39.532763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.108 [2024-10-13 17:43:39.532943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.108 [2024-10-13 17:43:39.533074] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.108 [2024-10-13 17:43:39.533083] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.108 [2024-10-13 17:43:39.533090] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.108 [2024-10-13 17:43:39.535250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.108 Malloc0 00:33:31.108 17:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:31.108 17:43:39 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:31.108 17:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:31.108 17:43:39 -- common/autotest_common.sh@10 -- # set +x 00:33:31.108 [2024-10-13 17:43:39.544625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.108 [2024-10-13 17:43:39.545114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.545455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.545465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.108 [2024-10-13 17:43:39.545474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.108 [2024-10-13 17:43:39.545617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.108 [2024-10-13 17:43:39.545705] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.108 [2024-10-13 17:43:39.545714] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.108 [2024-10-13 17:43:39.545721] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.108 [2024-10-13 17:43:39.548045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.108 17:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:31.108 17:43:39 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:31.108 17:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:31.108 17:43:39 -- common/autotest_common.sh@10 -- # set +x 00:33:31.108 [2024-10-13 17:43:39.557186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.108 [2024-10-13 17:43:39.557693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.557908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.108 [2024-10-13 17:43:39.557918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9c160 with addr=10.0.0.2, port=4420 00:33:31.108 [2024-10-13 17:43:39.557926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9c160 is same with the state(5) to be set 00:33:31.108 [2024-10-13 17:43:39.558074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c160 (9): Bad file descriptor 00:33:31.108 [2024-10-13 17:43:39.558182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.108 [2024-10-13 17:43:39.558192] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.108 [2024-10-13 17:43:39.558200] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.108 [2024-10-13 17:43:39.560489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.108 17:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:31.108 17:43:39 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.108 17:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:31.108 17:43:39 -- common/autotest_common.sh@10 -- # set +x 00:33:31.108 [2024-10-13 17:43:39.568744] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.108 [2024-10-13 17:43:39.569673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.108 17:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:31.108 17:43:39 -- host/bdevperf.sh@38 -- # wait 3413845 00:33:31.109 [2024-10-13 17:43:39.598357] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:41.107 00:33:41.107 Latency(us) 00:33:41.107 [2024-10-13T15:43:49.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.107 [2024-10-13T15:43:49.631Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:41.107 Verification LBA range: start 0x0 length 0x4000 00:33:41.107 Nvme1n1 : 15.00 14533.36 56.77 14929.01 0.00 4329.75 764.59 13434.88 00:33:41.107 [2024-10-13T15:43:49.631Z] =================================================================================================================== 00:33:41.107 [2024-10-13T15:43:49.631Z] Total : 14533.36 56.77 14929.01 0.00 4329.75 764.59 13434.88 00:33:41.107 17:43:48 -- host/bdevperf.sh@39 -- # sync 00:33:41.107 17:43:48 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:41.107 17:43:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.107 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:33:41.107 17:43:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.107 17:43:48 -- host/bdevperf.sh@42 -- # trap - 
SIGINT SIGTERM EXIT 00:33:41.107 17:43:48 -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:41.107 17:43:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:41.107 17:43:48 -- nvmf/common.sh@116 -- # sync 00:33:41.107 17:43:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:41.107 17:43:48 -- nvmf/common.sh@119 -- # set +e 00:33:41.107 17:43:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:41.107 17:43:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:41.107 rmmod nvme_tcp 00:33:41.107 rmmod nvme_fabrics 00:33:41.107 rmmod nvme_keyring 00:33:41.107 17:43:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:41.107 17:43:48 -- nvmf/common.sh@123 -- # set -e 00:33:41.107 17:43:48 -- nvmf/common.sh@124 -- # return 0 00:33:41.107 17:43:48 -- nvmf/common.sh@477 -- # '[' -n 3414978 ']' 00:33:41.107 17:43:48 -- nvmf/common.sh@478 -- # killprocess 3414978 00:33:41.107 17:43:48 -- common/autotest_common.sh@926 -- # '[' -z 3414978 ']' 00:33:41.107 17:43:48 -- common/autotest_common.sh@930 -- # kill -0 3414978 00:33:41.107 17:43:48 -- common/autotest_common.sh@931 -- # uname 00:33:41.107 17:43:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:41.107 17:43:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3414978 00:33:41.107 17:43:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:41.107 17:43:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:41.107 17:43:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3414978' 00:33:41.107 killing process with pid 3414978 00:33:41.107 17:43:48 -- common/autotest_common.sh@945 -- # kill 3414978 00:33:41.107 17:43:48 -- common/autotest_common.sh@950 -- # wait 3414978 00:33:41.107 17:43:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:41.107 17:43:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:41.107 17:43:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:41.107 17:43:48 -- nvmf/common.sh@273 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:41.107 17:43:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:41.107 17:43:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.107 17:43:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:41.107 17:43:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.049 17:43:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:42.049 00:33:42.049 real 0m27.756s 00:33:42.049 user 1m2.752s 00:33:42.049 sys 0m7.293s 00:33:42.049 17:43:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:42.049 17:43:50 -- common/autotest_common.sh@10 -- # set +x 00:33:42.049 ************************************ 00:33:42.049 END TEST nvmf_bdevperf 00:33:42.049 ************************************ 00:33:42.049 17:43:50 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:42.049 17:43:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:42.049 17:43:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:42.049 17:43:50 -- common/autotest_common.sh@10 -- # set +x 00:33:42.049 ************************************ 00:33:42.049 START TEST nvmf_target_disconnect 00:33:42.049 ************************************ 00:33:42.049 17:43:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:42.311 * Looking for test storage... 
00:33:42.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:42.311 17:43:50 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:42.311 17:43:50 -- nvmf/common.sh@7 -- # uname -s 00:33:42.311 17:43:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:42.311 17:43:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:42.311 17:43:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:42.311 17:43:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:42.311 17:43:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:42.311 17:43:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:42.311 17:43:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:42.311 17:43:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:42.311 17:43:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:42.311 17:43:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:42.311 17:43:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:42.311 17:43:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:42.311 17:43:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:42.311 17:43:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:42.311 17:43:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:42.311 17:43:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:42.311 17:43:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:42.311 17:43:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:42.311 17:43:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:42.311 17:43:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.311 17:43:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.311 17:43:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.311 17:43:50 -- paths/export.sh@5 -- # export PATH 00:33:42.311 17:43:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.311 17:43:50 -- nvmf/common.sh@46 -- # : 0 00:33:42.311 17:43:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:42.311 17:43:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:42.311 17:43:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:42.311 17:43:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:42.311 17:43:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:42.311 17:43:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:42.311 17:43:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:42.311 17:43:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:42.311 17:43:50 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:42.311 17:43:50 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:42.311 17:43:50 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:42.311 17:43:50 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:33:42.311 17:43:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:42.311 17:43:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:42.311 17:43:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:42.311 17:43:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:42.311 17:43:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:42.311 17:43:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.311 17:43:50 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:42.311 17:43:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.311 17:43:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:42.312 17:43:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:42.312 17:43:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:42.312 17:43:50 -- common/autotest_common.sh@10 -- # set +x 00:33:50.457 17:43:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:50.457 17:43:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:50.457 17:43:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:50.457 17:43:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:50.457 17:43:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:50.457 17:43:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:50.457 17:43:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:50.457 17:43:57 -- nvmf/common.sh@294 -- # net_devs=() 00:33:50.457 17:43:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:50.457 17:43:57 -- nvmf/common.sh@295 -- # e810=() 00:33:50.457 17:43:57 -- nvmf/common.sh@295 -- # local -ga e810 00:33:50.457 17:43:57 -- nvmf/common.sh@296 -- # x722=() 00:33:50.457 17:43:57 -- nvmf/common.sh@296 -- # local -ga x722 00:33:50.458 17:43:57 -- nvmf/common.sh@297 -- # mlx=() 00:33:50.458 17:43:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:50.458 17:43:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.458 17:43:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:50.458 17:43:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:50.458 17:43:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:50.458 17:43:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:50.458 17:43:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:50.458 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:50.458 17:43:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:50.458 17:43:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:50.458 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:50.458 17:43:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:33:50.458 17:43:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:50.458 17:43:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:50.458 17:43:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.458 17:43:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:50.458 17:43:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.458 17:43:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:50.458 Found net devices under 0000:31:00.0: cvl_0_0 00:33:50.458 17:43:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.458 17:43:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:50.458 17:43:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.458 17:43:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:50.458 17:43:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.458 17:43:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:50.458 Found net devices under 0000:31:00.1: cvl_0_1 00:33:50.458 17:43:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.458 17:43:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:50.458 17:43:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:50.458 17:43:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:50.458 17:43:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:50.458 17:43:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.458 17:43:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.458 17:43:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.458 17:43:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:50.458 17:43:57 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.458 17:43:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.458 17:43:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:50.458 17:43:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.458 17:43:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.458 17:43:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:50.458 17:43:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:50.458 17:43:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.458 17:43:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.458 17:43:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.458 17:43:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.458 17:43:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:50.458 17:43:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.458 17:43:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.458 17:43:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.458 17:43:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:50.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:50.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:33:50.458 00:33:50.458 --- 10.0.0.2 ping statistics --- 00:33:50.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.458 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:33:50.458 17:43:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:50.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:33:50.458 00:33:50.458 --- 10.0.0.1 ping statistics --- 00:33:50.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.458 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:33:50.458 17:43:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.458 17:43:58 -- nvmf/common.sh@410 -- # return 0 00:33:50.458 17:43:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:50.458 17:43:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.458 17:43:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:50.458 17:43:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:50.458 17:43:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.458 17:43:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:50.458 17:43:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:50.458 17:43:58 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:50.458 17:43:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:50.458 17:43:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:50.458 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:50.458 ************************************ 00:33:50.458 START TEST nvmf_target_disconnect_tc1 00:33:50.458 ************************************ 00:33:50.458 17:43:58 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:33:50.458 17:43:58 -- host/target_disconnect.sh@32 -- # set +e 00:33:50.458 17:43:58 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:50.458 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.458 [2024-10-13 17:43:58.207412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.458 
[2024-10-13 17:43:58.207817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.458 [2024-10-13 17:43:58.207834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73830 with addr=10.0.0.2, port=4420 00:33:50.458 [2024-10-13 17:43:58.207862] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:50.458 [2024-10-13 17:43:58.207877] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:50.458 [2024-10-13 17:43:58.207885] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:50.458 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:50.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:50.458 Initializing NVMe Controllers 00:33:50.458 17:43:58 -- host/target_disconnect.sh@33 -- # trap - ERR 00:33:50.458 17:43:58 -- host/target_disconnect.sh@33 -- # print_backtrace 00:33:50.458 17:43:58 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:33:50.458 17:43:58 -- common/autotest_common.sh@1132 -- # return 0 00:33:50.458 17:43:58 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:33:50.458 17:43:58 -- host/target_disconnect.sh@41 -- # set -e 00:33:50.458 00:33:50.458 real 0m0.099s 00:33:50.458 user 0m0.045s 00:33:50.458 sys 0m0.053s 00:33:50.458 17:43:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:50.458 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:50.458 ************************************ 00:33:50.458 END TEST nvmf_target_disconnect_tc1 00:33:50.458 ************************************ 00:33:50.458 17:43:58 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:50.459 17:43:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:50.459 17:43:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:50.459 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:50.459 
************************************ 00:33:50.459 START TEST nvmf_target_disconnect_tc2 00:33:50.459 ************************************ 00:33:50.459 17:43:58 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:33:50.459 17:43:58 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:33:50.459 17:43:58 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:50.459 17:43:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:50.459 17:43:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:50.459 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:50.459 17:43:58 -- nvmf/common.sh@469 -- # nvmfpid=3421050 00:33:50.459 17:43:58 -- nvmf/common.sh@470 -- # waitforlisten 3421050 00:33:50.459 17:43:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:50.459 17:43:58 -- common/autotest_common.sh@819 -- # '[' -z 3421050 ']' 00:33:50.459 17:43:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.459 17:43:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:50.459 17:43:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.459 17:43:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:50.459 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:50.459 [2024-10-13 17:43:58.330373] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:33:50.459 [2024-10-13 17:43:58.330422] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.459 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.459 [2024-10-13 17:43:58.415854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:50.459 [2024-10-13 17:43:58.454816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:50.459 [2024-10-13 17:43:58.454971] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.459 [2024-10-13 17:43:58.454981] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:50.459 [2024-10-13 17:43:58.454988] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:50.459 [2024-10-13 17:43:58.455138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:50.459 [2024-10-13 17:43:58.455315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:50.459 [2024-10-13 17:43:58.455477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:50.459 [2024-10-13 17:43:58.455478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:50.721 17:43:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:50.721 17:43:59 -- common/autotest_common.sh@852 -- # return 0 00:33:50.721 17:43:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:50.721 17:43:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:50.721 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:33:50.721 17:43:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.721 17:43:59 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:33:50.721 17:43:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:50.721 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:33:50.721 Malloc0 00:33:50.721 17:43:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:50.721 17:43:59 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:50.721 17:43:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:50.721 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:33:50.721 [2024-10-13 17:43:59.190667] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.721 17:43:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:50.721 17:43:59 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:50.721 17:43:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:50.721 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:33:50.721 17:43:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:50.721 17:43:59 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:50.721 17:43:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:50.721 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:33:50.721 17:43:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:50.721 17:43:59 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.721 17:43:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:50.721 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:33:50.721 [2024-10-13 17:43:59.231129] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.721 17:43:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:50.721 17:43:59 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:50.721 
17:43:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:50.721 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:33:50.982 17:43:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:50.982 17:43:59 -- host/target_disconnect.sh@50 -- # reconnectpid=3421376 00:33:50.982 17:43:59 -- host/target_disconnect.sh@52 -- # sleep 2 00:33:50.982 17:43:59 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:50.982 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.901 17:44:01 -- host/target_disconnect.sh@53 -- # kill -9 3421050 00:33:52.901 17:44:01 -- host/target_disconnect.sh@55 -- # sleep 2 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 
00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Write completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 Read completed with error (sct=0, sc=8) 00:33:52.901 starting I/O failed 00:33:52.901 [2024-10-13 17:44:01.263504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:52.901 [2024-10-13 17:44:01.263892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.264413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.264452] 
nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.901 qpair failed and we were unable to recover it. 00:33:52.901 [2024-10-13 17:44:01.264759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.264965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.264977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.901 qpair failed and we were unable to recover it. 00:33:52.901 [2024-10-13 17:44:01.265356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.265694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.265710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.901 qpair failed and we were unable to recover it. 00:33:52.901 [2024-10-13 17:44:01.266313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.266692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.266707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.901 qpair failed and we were unable to recover it. 
00:33:52.901 [2024-10-13 17:44:01.266922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.267369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.267407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.901 qpair failed and we were unable to recover it. 00:33:52.901 [2024-10-13 17:44:01.267603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.267908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.267921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.901 qpair failed and we were unable to recover it. 00:33:52.901 [2024-10-13 17:44:01.268159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.268394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.268407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.901 qpair failed and we were unable to recover it. 00:33:52.901 [2024-10-13 17:44:01.268706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.269016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.901 [2024-10-13 17:44:01.269028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.901 qpair failed and we were unable to recover it. 
00:33:52.901 [... same error sequence (posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 → nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeated for every subsequent connection attempt from [2024-10-13 17:44:01.269239] through [2024-10-13 17:44:01.320392] ...]
00:33:52.903 [2024-10-13 17:44:01.320707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.321050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.321078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.321374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.321717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.321739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.322048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.322400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.322423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.322746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.323092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.323114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 
00:33:52.903 [2024-10-13 17:44:01.323463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.323805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.323831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.324129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.324449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.324471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.324800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.325138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.325160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.325512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.325838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.325859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 
00:33:52.903 [2024-10-13 17:44:01.326193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.326539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.326560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.326930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.327263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.327286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.327650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.327996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.328025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.328380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.328722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.328752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 
00:33:52.903 [2024-10-13 17:44:01.329094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.329433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.329462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.329819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.330047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.330091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.330411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.330803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.330830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.331236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.331464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.331502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 
00:33:52.903 [2024-10-13 17:44:01.331809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.332145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.332174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.332519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.332729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.332759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.333090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.333436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.333465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.333807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.334128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.334158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 
00:33:52.903 [2024-10-13 17:44:01.334521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.334856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.334885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.335215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.335554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.335583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.335921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.336225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.336255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 00:33:52.903 [2024-10-13 17:44:01.336507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.336843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.336872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.903 qpair failed and we were unable to recover it. 
00:33:52.903 [2024-10-13 17:44:01.337221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.903 [2024-10-13 17:44:01.337562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.337590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.337939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.338284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.338314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.338672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.339016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.339045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.339398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.339740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.339768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 
00:33:52.904 [2024-10-13 17:44:01.340124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.340490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.340518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.340875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.341224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.341254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.341595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.341929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.341960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.342296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.342637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.342664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 
00:33:52.904 [2024-10-13 17:44:01.343025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.343345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.343374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.343723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.344107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.344137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.344487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.344837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.344866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.345244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.345585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.345614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 
00:33:52.904 [2024-10-13 17:44:01.345970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.346353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.346383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.346734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.347079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.347108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.347478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.347826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.347854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.348202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.348452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.348481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 
00:33:52.904 [2024-10-13 17:44:01.348849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.349190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.349220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.349616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.349952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.349982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.350299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.350647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.350675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.351033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.351347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.351377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 
00:33:52.904 [2024-10-13 17:44:01.351742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.352061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.904 [2024-10-13 17:44:01.352102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.904 qpair failed and we were unable to recover it. 00:33:52.904 [2024-10-13 17:44:01.352476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.352679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.352708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 00:33:52.905 [2024-10-13 17:44:01.353127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.353488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.353517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 00:33:52.905 [2024-10-13 17:44:01.353858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.354213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.354243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 
00:33:52.905 [2024-10-13 17:44:01.354634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.354827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.354855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 00:33:52.905 [2024-10-13 17:44:01.355172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.355511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.355540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 00:33:52.905 [2024-10-13 17:44:01.355868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.355955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.355982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 00:33:52.905 [2024-10-13 17:44:01.356397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.356696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.356725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 
00:33:52.905 [2024-10-13 17:44:01.356884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.357190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.357220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 00:33:52.905 [2024-10-13 17:44:01.357542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.357905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.357934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 00:33:52.905 [2024-10-13 17:44:01.358300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.358548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.358576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 00:33:52.905 [2024-10-13 17:44:01.358940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.359283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.905 [2024-10-13 17:44:01.359312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:52.905 qpair failed and we were unable to recover it. 
00:33:52.905 [2024-10-13 17:44:01.359644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.905 [2024-10-13 17:44:01.359975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.905 [2024-10-13 17:44:01.360004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:52.905 qpair failed and we were unable to recover it.
[... the identical cycle — two posix_sock_create connect() failures (errno = 111, ECONNREFUSED), one nvme_tcp_qpair_connect_sock error for tqpair=0x7f7158000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." — repeats continuously between 17:44:01.360363 and 17:44:01.419996 ...]
00:33:53.181 [2024-10-13 17:44:01.420385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.181 [2024-10-13 17:44:01.420731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.181 [2024-10-13 17:44:01.420759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.181 qpair failed and we were unable to recover it.
00:33:53.181 [2024-10-13 17:44:01.421106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.421474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.421503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.421856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.422200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.422237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.422596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.422940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.422968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.423318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.423660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.423688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 
00:33:53.181 [2024-10-13 17:44:01.424009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.424350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.424379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.424725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.425058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.425107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.425341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.425573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.425604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.425917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.426236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.426267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 
00:33:53.181 [2024-10-13 17:44:01.426636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.426949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.426977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.427308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.427625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.427655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.427993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.428329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.428359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.428696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.428929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.428966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 
00:33:53.181 [2024-10-13 17:44:01.429288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.429500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.429530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.429866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.430239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.430269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.430507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.430851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.430880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.431217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.431564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.431593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 
00:33:53.181 [2024-10-13 17:44:01.431956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.432305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.432335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.432693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.433038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.433075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.433430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.433811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.433839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.434221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.434558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.434586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 
00:33:53.181 [2024-10-13 17:44:01.434940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.435277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.435306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.435650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.435996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.436030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.436406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.436635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.436665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.181 qpair failed and we were unable to recover it. 00:33:53.181 [2024-10-13 17:44:01.437025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.181 [2024-10-13 17:44:01.437382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.437413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 
00:33:53.182 [2024-10-13 17:44:01.437746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.438091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.438122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.438477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.438672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.438701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.439038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.439390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.439420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.439768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.440112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.440141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 
00:33:53.182 [2024-10-13 17:44:01.440503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.440702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.440732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.441091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.441403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.441432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.441774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.442105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.442135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.442476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.442807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.442836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 
00:33:53.182 [2024-10-13 17:44:01.443050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.443347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.443377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.443726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.444075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.444105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.444439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.444788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.444816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.445161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.445507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.445537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 
00:33:53.182 [2024-10-13 17:44:01.445881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.446222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.446252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.446579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.446811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.446839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.447176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.447516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.447544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.447880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.448238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.448267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 
00:33:53.182 [2024-10-13 17:44:01.448592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.448943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.448972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.449305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.449650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.449679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.450029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.450383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.450413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.450661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.450998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.451027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 
00:33:53.182 [2024-10-13 17:44:01.451370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.451714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.451742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.452099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.452476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.452504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.452842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.453194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.453225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.453584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.453814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.453843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 
00:33:53.182 [2024-10-13 17:44:01.454210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.454541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.454569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.182 qpair failed and we were unable to recover it. 00:33:53.182 [2024-10-13 17:44:01.454906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.182 [2024-10-13 17:44:01.455266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.455295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.183 qpair failed and we were unable to recover it. 00:33:53.183 [2024-10-13 17:44:01.455643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.455987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.456017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.183 qpair failed and we were unable to recover it. 00:33:53.183 [2024-10-13 17:44:01.456420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.456766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.456796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.183 qpair failed and we were unable to recover it. 
00:33:53.183 [2024-10-13 17:44:01.457146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.457494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.457524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.183 qpair failed and we were unable to recover it. 00:33:53.183 [2024-10-13 17:44:01.457867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.458248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.458278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.183 qpair failed and we were unable to recover it. 00:33:53.183 [2024-10-13 17:44:01.458621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.458966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.458995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.183 qpair failed and we were unable to recover it. 00:33:53.183 [2024-10-13 17:44:01.459299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.459634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.183 [2024-10-13 17:44:01.459663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.183 qpair failed and we were unable to recover it. 
00:33:53.183 [2024-10-13 17:44:01.460245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.460604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.460635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.460892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.461228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.461258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.461588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.461955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.461984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.462373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.462736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.462765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.463142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.463494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.463523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.463868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.464218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.464248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.464614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.464919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.464948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.465304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.465636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.465664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.466049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.466368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.466397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.466756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.467120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.467149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.467371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.467745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.467773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.468122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.468484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.468513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.468869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.469218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.469249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.469602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.469924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.469953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.470229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.470570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.470599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.470988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.471326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.471356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.471592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.471792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.471822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.472135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.472505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.472533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.472819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.473098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.473128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.473470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.473828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.473856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.183 qpair failed and we were unable to recover it.
00:33:53.183 [2024-10-13 17:44:01.474095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.183 [2024-10-13 17:44:01.474470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.474498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.474876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.475211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.475242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.475577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.475894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.475924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.476262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.476617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.476646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.477007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.477330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.477361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.477716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.478072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.478103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.478442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.478672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.478713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.479060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.479296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.479328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.479674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.480027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.480055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.480416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.480758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.480786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.481130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.481489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.481518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.481845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.482206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.482236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.482555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.482765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.482794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.483131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.483489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.483518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.483861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.484252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.484282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.484634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.484977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.485006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.485324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.485673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.485702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.486050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.486397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.486426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.486772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.487121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.487151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.487508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.487698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.487725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.488103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.488437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.488465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.488815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.489137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.489167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.489412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.489738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.489767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.490089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.490316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.490344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.490555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.490939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.490967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.491315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.491659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.491687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.492031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.492400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.492431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.184 qpair failed and we were unable to recover it.
00:33:53.184 [2024-10-13 17:44:01.492783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.184 [2024-10-13 17:44:01.493133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.493164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.493526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.493869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.493898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.494233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.494568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.494597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.494932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.495274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.495304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.495660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.495997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.496027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.496429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.496787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.496817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.497165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.497439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.497468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.497792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.498134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.498163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.498529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.498748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.498775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.499152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.499499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.499528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.499933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.500252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.500281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.500640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.500985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.501014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.501294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.501607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.501637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.502009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.502317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.502348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.502690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.503038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.503076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.503411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.503754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.503782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.504145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.504460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.504489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.504869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.505207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.505236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.505474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.505784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.505813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.506198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.506551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.506581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.506942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.507287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.507317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.507687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.508039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.508080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.508440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.508806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.508835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.185 qpair failed and we were unable to recover it.
00:33:53.185 [2024-10-13 17:44:01.509171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.185 [2024-10-13 17:44:01.509517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.509546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.509891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.510316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.510347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.510580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.510942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.510971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.511330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.511676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.511705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.511998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.512311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.512341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.512688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.513027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.513057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.513292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.513657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.513687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.514028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.514403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.514435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.514777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.515119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.515150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.515510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.515864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.515894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.516243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.516589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.516619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.516967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.517288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.517320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.517658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.518019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.518049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.518398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.518743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.518773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.519134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.519483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.519513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.519851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.520191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.520220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.520568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.520918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.520948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.521314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.521657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.186 [2024-10-13 17:44:01.521686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.186 qpair failed and we were unable to recover it.
00:33:53.186 [2024-10-13 17:44:01.522052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.522416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.522447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.186 qpair failed and we were unable to recover it. 00:33:53.186 [2024-10-13 17:44:01.522791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.523072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.523101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.186 qpair failed and we were unable to recover it. 00:33:53.186 [2024-10-13 17:44:01.523445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.523795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.523825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.186 qpair failed and we were unable to recover it. 00:33:53.186 [2024-10-13 17:44:01.524153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.524514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.524544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.186 qpair failed and we were unable to recover it. 
00:33:53.186 [2024-10-13 17:44:01.524904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.525253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.525284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.186 qpair failed and we were unable to recover it. 00:33:53.186 [2024-10-13 17:44:01.525639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.525991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.526021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.186 qpair failed and we were unable to recover it. 00:33:53.186 [2024-10-13 17:44:01.526350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.526683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.526711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.186 qpair failed and we were unable to recover it. 00:33:53.186 [2024-10-13 17:44:01.527061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.527411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.527441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.186 qpair failed and we were unable to recover it. 
00:33:53.186 [2024-10-13 17:44:01.527752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.528087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.528133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.186 qpair failed and we were unable to recover it. 00:33:53.186 [2024-10-13 17:44:01.528469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.186 [2024-10-13 17:44:01.528812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.528842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.529184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.529538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.529567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.529922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.530160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.530189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 
00:33:53.187 [2024-10-13 17:44:01.530567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.530918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.530946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.531305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.531651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.531680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.532025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.532399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.532429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.532777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.533132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.533162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 
00:33:53.187 [2024-10-13 17:44:01.533557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.533777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.533804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.534157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.534506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.534535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.534863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.535213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.535248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.535595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.535935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.535965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 
00:33:53.187 [2024-10-13 17:44:01.536310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.536628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.536657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.536845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.537079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.537113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.537447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.537814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.537843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.538204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.538552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.538581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 
00:33:53.187 [2024-10-13 17:44:01.538929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.539150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.539180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.539429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.539807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.539838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.540185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.540539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.540569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.540793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.541142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.541174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 
00:33:53.187 [2024-10-13 17:44:01.541496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.541843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.541879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.542218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.542550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.542580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.542919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.543242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.543274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.543521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.543861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.543891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 
00:33:53.187 [2024-10-13 17:44:01.544271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.544605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.544635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.545022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.545379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.545409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.545750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.546082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.546113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 00:33:53.187 [2024-10-13 17:44:01.546473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.546811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.187 [2024-10-13 17:44:01.546842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.187 qpair failed and we were unable to recover it. 
00:33:53.187 [2024-10-13 17:44:01.547199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.547531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.547561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.547946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.548286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.548317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.548666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.549011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.549046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.549415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.549652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.549686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 
00:33:53.188 [2024-10-13 17:44:01.550030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.550357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.550388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.550746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.551093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.551125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.551468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.551785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.551814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.552167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.552511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.552541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 
00:33:53.188 [2024-10-13 17:44:01.552908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.553258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.553290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.553649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.553995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.554025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.554384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.554726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.554757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.554992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.555276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.555308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 
00:33:53.188 [2024-10-13 17:44:01.555653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.555885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.555915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.556279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.556639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.556669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.557014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.557322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.557353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.557710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.558048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.558085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 
00:33:53.188 [2024-10-13 17:44:01.558435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.558739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.558768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.559117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.559480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.559509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.559851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.560198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.560227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 00:33:53.188 [2024-10-13 17:44:01.560573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.560920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.188 [2024-10-13 17:44:01.560949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.188 qpair failed and we were unable to recover it. 
00:33:53.188 [2024-10-13 17:44:01.561293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.188 [2024-10-13 17:44:01.561633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.188 [2024-10-13 17:44:01.561663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.188 qpair failed and we were unable to recover it.
[... the same failure pattern repeats continuously from 17:44:01.561 through 17:44:01.623 (log wall-clock 00:33:53.188-00:33:53.192): two to three posix_sock_create connect() failures with errno = 111 (ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7f7158000b90 against addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:33:53.192 [2024-10-13 17:44:01.623618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.623958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.623988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.624237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.624577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.624607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.624935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.625287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.625317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.625683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.626038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.626074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 
00:33:53.192 [2024-10-13 17:44:01.626316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.626639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.626668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.627109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.627504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.627533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.627875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.628221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.628251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.628586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.628970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.629000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 
00:33:53.192 [2024-10-13 17:44:01.629336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.629715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.629745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.629951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.630292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.630324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.630673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.631022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.631052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.631377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.631728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.631758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 
00:33:53.192 [2024-10-13 17:44:01.632012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.632398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.632431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.632772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.633128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.633159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.633513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.633869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.633898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.634231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.634464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.634509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 
00:33:53.192 [2024-10-13 17:44:01.634860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.635205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.635236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.635599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.635950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.635979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.636329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.636678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.636710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.637072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.637420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.637449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 
00:33:53.192 [2024-10-13 17:44:01.637820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.638135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.638165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.192 qpair failed and we were unable to recover it. 00:33:53.192 [2024-10-13 17:44:01.638529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.638876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.192 [2024-10-13 17:44:01.638905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.639322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.639663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.639694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.640005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.640330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.640360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 
00:33:53.193 [2024-10-13 17:44:01.640614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.640978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.641008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.641350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.641692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.641730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.642080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.642436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.642465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.642834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.643183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.643216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 
00:33:53.193 [2024-10-13 17:44:01.643522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.643863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.643892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.644240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.644585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.644614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.644954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.645317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.645347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.645702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.646071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.646102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 
00:33:53.193 [2024-10-13 17:44:01.646463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.646773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.646803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.647142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.647480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.647509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.647856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.648205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.648235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.648595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.648941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.648977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 
00:33:53.193 [2024-10-13 17:44:01.649326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.649563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.649593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.649935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.650277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.650309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.650541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.650884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.650914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.651280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.651625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.651655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 
00:33:53.193 [2024-10-13 17:44:01.651985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.652336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.652367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.652604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.652958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.652987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.653337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.653677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.653707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.654023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.654345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.654377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 
00:33:53.193 [2024-10-13 17:44:01.654735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.655088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.655120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.655462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.655827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.655862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.656088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.656448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.656478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 00:33:53.193 [2024-10-13 17:44:01.656859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.657184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.657214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.193 qpair failed and we were unable to recover it. 
00:33:53.193 [2024-10-13 17:44:01.657559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.193 [2024-10-13 17:44:01.657921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.657951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.194 qpair failed and we were unable to recover it. 00:33:53.194 [2024-10-13 17:44:01.658189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.658566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.658595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.194 qpair failed and we were unable to recover it. 00:33:53.194 [2024-10-13 17:44:01.658944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.659314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.659346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.194 qpair failed and we were unable to recover it. 00:33:53.194 [2024-10-13 17:44:01.659711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.660078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.660109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.194 qpair failed and we were unable to recover it. 
00:33:53.194 [2024-10-13 17:44:01.660459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.660820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.660850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.194 qpair failed and we were unable to recover it. 00:33:53.194 [2024-10-13 17:44:01.661207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.661467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.661498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.194 qpair failed and we were unable to recover it. 00:33:53.194 [2024-10-13 17:44:01.661836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.662186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.662217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.194 qpair failed and we were unable to recover it. 00:33:53.194 [2024-10-13 17:44:01.662592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.662936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.194 [2024-10-13 17:44:01.662965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.194 qpair failed and we were unable to recover it. 
00:33:53.194 [2024-10-13 17:44:01.663300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.194 [2024-10-13 17:44:01.663652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.194 [2024-10-13 17:44:01.663682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.194 qpair failed and we were unable to recover it.
[... the four-line sequence above repeats verbatim ~86 more times, timestamps advancing from 17:44:01.664 through 17:44:01.725, every attempt against tqpair=0x7f7158000b90, addr=10.0.0.2, port=4420 ...]
00:33:53.468 [2024-10-13 17:44:01.725529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.468 [2024-10-13 17:44:01.725886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.468 [2024-10-13 17:44:01.725916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.468 qpair failed and we were unable to recover it.
00:33:53.468 [2024-10-13 17:44:01.726169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.726511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.726540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.726947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.727296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.727326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.727679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.728035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.728073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.728283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.728632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.728663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 
00:33:53.468 [2024-10-13 17:44:01.729024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.729352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.729383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.729734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.730089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.730121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.730507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.730836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.730866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.731121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.731432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.731462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 
00:33:53.468 [2024-10-13 17:44:01.731836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.732160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.732193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.732588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.732938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.732968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.733324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.733717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.733746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.734109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.734487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.734517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 
00:33:53.468 [2024-10-13 17:44:01.734886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.735245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.735277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.735641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.735990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.736019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.736356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.736705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.736735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.737101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.737453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.737483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 
00:33:53.468 [2024-10-13 17:44:01.737852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.738169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.738201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.468 qpair failed and we were unable to recover it. 00:33:53.468 [2024-10-13 17:44:01.738557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.468 [2024-10-13 17:44:01.738918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.738947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.739268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.739551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.739581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.739946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.740303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.740334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 
00:33:53.469 [2024-10-13 17:44:01.740697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.741050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.741088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.741438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.741788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.741818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.742185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.742581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.742610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.742977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.743332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.743364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 
00:33:53.469 [2024-10-13 17:44:01.743729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.744079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.744110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.744457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.744812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.744843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.745199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.745548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.745577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.745961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.746346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.746379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 
00:33:53.469 [2024-10-13 17:44:01.746737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.747099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.747130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.747485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.747844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.747874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.748226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.748571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.748602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.748946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.749317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.749348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 
00:33:53.469 [2024-10-13 17:44:01.749707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.750082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.750119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.750486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.750850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.750880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.751247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.751620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.751650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.751978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.752332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.752363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 
00:33:53.469 [2024-10-13 17:44:01.752728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.753075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.753106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.753475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.753832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.753864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.754230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.754584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.754615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.754984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.755228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.755258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 
00:33:53.469 [2024-10-13 17:44:01.755628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.755983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.756015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.756348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.756712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.756743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.757095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.757447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.757485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 00:33:53.469 [2024-10-13 17:44:01.757851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.758189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.469 [2024-10-13 17:44:01.758221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.469 qpair failed and we were unable to recover it. 
00:33:53.470 [2024-10-13 17:44:01.758592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.758938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.758968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.759333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.759699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.759731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.760081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.760448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.760477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.760849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.761071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.761101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 
00:33:53.470 [2024-10-13 17:44:01.761452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.761813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.761844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.762211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.762581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.762612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.762957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.763297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.763328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.763653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.763995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.764026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 
00:33:53.470 [2024-10-13 17:44:01.764388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.764736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.764773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.765135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.765378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.765407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.765772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.766144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.766174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.766543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.766889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.766920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 
00:33:53.470 [2024-10-13 17:44:01.767216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.767578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.767609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.767858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.770418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.770486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.770916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.771711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.771758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 00:33:53.470 [2024-10-13 17:44:01.772188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.772568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.470 [2024-10-13 17:44:01.772603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.470 qpair failed and we were unable to recover it. 
00:33:53.473 [2024-10-13 17:44:01.837938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.473 [2024-10-13 17:44:01.838177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.473 [2024-10-13 17:44:01.838209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.473 qpair failed and we were unable to recover it. 00:33:53.473 [2024-10-13 17:44:01.838647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.473 [2024-10-13 17:44:01.838967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.473 [2024-10-13 17:44:01.838999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.473 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.839282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.839668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.839700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.840129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.840459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.840491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 
00:33:53.474 [2024-10-13 17:44:01.840842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.841184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.841215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.841565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.841927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.841958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.842343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.842545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.842576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.842820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.843239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.843270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 
00:33:53.474 [2024-10-13 17:44:01.843658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.844076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.844108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.844448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.844675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.844706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.845043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.845449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.845482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.845835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.846099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.846132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 
00:33:53.474 [2024-10-13 17:44:01.846366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.846610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.846643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.846877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.847171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.847203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.847590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.847945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.847975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.848367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.848722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.848754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 
00:33:53.474 [2024-10-13 17:44:01.849126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.849506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.849538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.849908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.850254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.850286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.850651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.851005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.851035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.851338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.851618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.851648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 
00:33:53.474 [2024-10-13 17:44:01.852011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.852357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.852389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.852746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.853096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.853126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.853405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.853750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.853784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.854142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.854537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.854568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 
00:33:53.474 [2024-10-13 17:44:01.854934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.855168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.855198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.855592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.855987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.856019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.856413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.856692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.856726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 00:33:53.474 [2024-10-13 17:44:01.857018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.857376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.857408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.474 qpair failed and we were unable to recover it. 
00:33:53.474 [2024-10-13 17:44:01.857781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.858163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.474 [2024-10-13 17:44:01.858195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.858576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.858926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.858956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.859331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.859696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.859728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.860090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.860456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.860486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 
00:33:53.475 [2024-10-13 17:44:01.860738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.861123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.861154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.861533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.861910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.861940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.862311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.862662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.862693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.863007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.863345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.863377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 
00:33:53.475 [2024-10-13 17:44:01.863611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.863950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.863982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.864303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.864672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.864703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.865057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.865437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.865467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.865715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.866093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.866126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 
00:33:53.475 [2024-10-13 17:44:01.866397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.866746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.866776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.867136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.867521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.867552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.867926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.868351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.868382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.868625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.868976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.869006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 
00:33:53.475 [2024-10-13 17:44:01.869280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.869716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.869747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.870122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.870494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.870524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.870877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.871132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.871163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.871522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.871859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.871891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 
00:33:53.475 [2024-10-13 17:44:01.872167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.872524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.872554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.872913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.873253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.873291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.873429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.873574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.873602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.873988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.874211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.874242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 
00:33:53.475 [2024-10-13 17:44:01.874477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.874830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.874861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.875131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.875460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.875491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.875866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.876089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.876122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.475 qpair failed and we were unable to recover it. 00:33:53.475 [2024-10-13 17:44:01.876365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.475 [2024-10-13 17:44:01.876716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.476 [2024-10-13 17:44:01.876746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.476 qpair failed and we were unable to recover it. 
00:33:53.476 [2024-10-13 17:44:01.877003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.476 [2024-10-13 17:44:01.877322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.476 [2024-10-13 17:44:01.877354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.476 qpair failed and we were unable to recover it.
[... the same failure cycle — posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 17:44:01.877 through 17:44:01.944; duplicate entries elided ...]
00:33:53.479 [2024-10-13 17:44:01.945055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.945433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.945459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.945838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.946183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.946212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.946576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.946926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.946953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.947186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.947505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.947527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 
00:33:53.479 [2024-10-13 17:44:01.947756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.948098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.948120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.948509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.948875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.948898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.949247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.949604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.949626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.949982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.950310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.950333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 
00:33:53.479 [2024-10-13 17:44:01.950662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.951016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.951039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.951413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.951769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.951790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.952148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.952535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.952557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.952896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.953232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.953254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 
00:33:53.479 [2024-10-13 17:44:01.953452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.953748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.953770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.953978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.954289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.954312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.479 qpair failed and we were unable to recover it. 00:33:53.479 [2024-10-13 17:44:01.954654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.954865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.479 [2024-10-13 17:44:01.954888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.955236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.955599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.955622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 
00:33:53.480 [2024-10-13 17:44:01.955957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.956456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.956479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.956679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.957003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.957025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.957451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.957810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.957832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.958111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.958462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.958484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 
00:33:53.480 [2024-10-13 17:44:01.958801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.959126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.959157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.959550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.959917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.959947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.960301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.960643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.960673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.961043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.961421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.961452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 
00:33:53.480 [2024-10-13 17:44:01.961665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.962090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.962124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.962517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.962900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.962931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.963304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.963673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.963705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.964080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.964508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.964538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 
00:33:53.480 [2024-10-13 17:44:01.964793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.965118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.965152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.965578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.965854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.965886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.966248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.966608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.966638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.966893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.967259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.967291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 
00:33:53.480 [2024-10-13 17:44:01.967680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.967955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.967989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.968373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.968701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.968732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.969101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.969410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.969442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.969791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.970129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.970162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 
00:33:53.480 [2024-10-13 17:44:01.970548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.970947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.970979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.971319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.971533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.971564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.971946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.972315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.972347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.972725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.973081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.973113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 
00:33:53.480 [2024-10-13 17:44:01.973529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.973893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.973923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.480 qpair failed and we were unable to recover it. 00:33:53.480 [2024-10-13 17:44:01.974203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.480 [2024-10-13 17:44:01.974588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.974619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.481 qpair failed and we were unable to recover it. 00:33:53.481 [2024-10-13 17:44:01.974983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.975338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.975369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.481 qpair failed and we were unable to recover it. 00:33:53.481 [2024-10-13 17:44:01.975728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.976083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.976115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.481 qpair failed and we were unable to recover it. 
00:33:53.481 [2024-10-13 17:44:01.976504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.976850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.976880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.481 qpair failed and we were unable to recover it. 00:33:53.481 [2024-10-13 17:44:01.977000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.977330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.977362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.481 qpair failed and we were unable to recover it. 00:33:53.481 [2024-10-13 17:44:01.977720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.978121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.978155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.481 qpair failed and we were unable to recover it. 00:33:53.481 [2024-10-13 17:44:01.978538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.978900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.978930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.481 qpair failed and we were unable to recover it. 
00:33:53.481 [2024-10-13 17:44:01.979296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.979588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.979618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.481 qpair failed and we were unable to recover it. 00:33:53.481 [2024-10-13 17:44:01.980002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.980317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.980347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.481 qpair failed and we were unable to recover it. 00:33:53.481 [2024-10-13 17:44:01.980612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.980958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.481 [2024-10-13 17:44:01.980989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.481 qpair failed and we were unable to recover it. 00:33:53.481 [2024-10-13 17:44:01.981316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.779 [2024-10-13 17:44:01.981680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.779 [2024-10-13 17:44:01.981715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.779 qpair failed and we were unable to recover it. 
00:33:53.779 [2024-10-13 17:44:01.982102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.779 [2024-10-13 17:44:01.982484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.779 [2024-10-13 17:44:01.982515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.779 qpair failed and we were unable to recover it. 00:33:53.779 [2024-10-13 17:44:01.982912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.779 [2024-10-13 17:44:01.983276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.779 [2024-10-13 17:44:01.983308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.779 qpair failed and we were unable to recover it. 00:33:53.779 [2024-10-13 17:44:01.983661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.779 [2024-10-13 17:44:01.984022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.779 [2024-10-13 17:44:01.984052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.779 qpair failed and we were unable to recover it. 00:33:53.779 [2024-10-13 17:44:01.984527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.779 [2024-10-13 17:44:01.984762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.779 [2024-10-13 17:44:01.984793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.779 qpair failed and we were unable to recover it. 
00:33:53.779 [2024-10-13 17:44:01.985040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.779 [2024-10-13 17:44:01.985450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.779 [2024-10-13 17:44:01.985489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.779 qpair failed and we were unable to recover it.
[... the four-line pattern above (two posix_sock_create connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously from 17:44:01.985040 through 17:44:02.047141; identical repetitions elided ...]
00:33:53.786 [2024-10-13 17:44:02.047526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.786 [2024-10-13 17:44:02.047885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.786 [2024-10-13 17:44:02.047915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.786 qpair failed and we were unable to recover it. 00:33:53.786 [2024-10-13 17:44:02.048133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.048528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.048560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.048908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.049251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.049284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.049687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.050054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.050095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 
00:33:53.787 [2024-10-13 17:44:02.050557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.050913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.050944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.051206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.051466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.051496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.051826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.052174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.052205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.052471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.052823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.052857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 
00:33:53.787 [2024-10-13 17:44:02.053200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.053582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.053612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.053992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.054327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.054360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.054714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.054920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.054954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.055405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.055718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.055751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 
00:33:53.787 [2024-10-13 17:44:02.056126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.056493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.056523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.056890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.057139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.057168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.057564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.057931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.057961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 00:33:53.787 [2024-10-13 17:44:02.058416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.058807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.787 [2024-10-13 17:44:02.058842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.787 qpair failed and we were unable to recover it. 
00:33:53.788 [2024-10-13 17:44:02.059191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.059561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.059593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.059943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.060219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.060253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.060422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.060786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.060816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.061187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.061547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.061578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 
00:33:53.788 [2024-10-13 17:44:02.061939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.062343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.062375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.062743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.063109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.063141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.063423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.063771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.063801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.064169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.064431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.064460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 
00:33:53.788 [2024-10-13 17:44:02.064795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.065054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.065092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.065525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.065811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.065842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.066053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.066427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.066458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.066816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.067149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.067182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 
00:33:53.788 [2024-10-13 17:44:02.067563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.067914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.067945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.068325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.068545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.068575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.068788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.069216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.788 [2024-10-13 17:44:02.069248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.788 qpair failed and we were unable to recover it. 00:33:53.788 [2024-10-13 17:44:02.069626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.069987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.070017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.789 qpair failed and we were unable to recover it. 
00:33:53.789 [2024-10-13 17:44:02.070382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.070639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.070669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.789 qpair failed and we were unable to recover it. 00:33:53.789 [2024-10-13 17:44:02.071018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.071412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.071445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.789 qpair failed and we were unable to recover it. 00:33:53.789 [2024-10-13 17:44:02.071798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.072018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.072049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.789 qpair failed and we were unable to recover it. 00:33:53.789 [2024-10-13 17:44:02.072350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.072706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.072737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.789 qpair failed and we were unable to recover it. 
00:33:53.789 [2024-10-13 17:44:02.073087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.073471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.073502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.789 qpair failed and we were unable to recover it. 00:33:53.789 [2024-10-13 17:44:02.073878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.074242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.074274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.789 qpair failed and we were unable to recover it. 00:33:53.789 [2024-10-13 17:44:02.074648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.075020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.789 [2024-10-13 17:44:02.075051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.789 qpair failed and we were unable to recover it. 00:33:53.789 [2024-10-13 17:44:02.075446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.075690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.075723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 
00:33:53.790 [2024-10-13 17:44:02.076109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.076476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.076505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.790 [2024-10-13 17:44:02.076866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.077207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.077238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.790 [2024-10-13 17:44:02.077602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.077957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.077988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.790 [2024-10-13 17:44:02.078367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.078604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.078635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 
00:33:53.790 [2024-10-13 17:44:02.079012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.079415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.079447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.790 [2024-10-13 17:44:02.079777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.080144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.080175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.790 [2024-10-13 17:44:02.080558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.080919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.080950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.790 [2024-10-13 17:44:02.081301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.081650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.081680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 
00:33:53.790 [2024-10-13 17:44:02.082060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.082291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.082320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.790 [2024-10-13 17:44:02.082562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.082930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.082960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.790 [2024-10-13 17:44:02.083201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.083552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.083583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.790 [2024-10-13 17:44:02.083817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.084156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.084187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 
00:33:53.790 [2024-10-13 17:44:02.084581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.084835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.084864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.790 [2024-10-13 17:44:02.085117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.085495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.790 [2024-10-13 17:44:02.085527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.790 qpair failed and we were unable to recover it. 00:33:53.791 [2024-10-13 17:44:02.085896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.086259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.086290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.791 qpair failed and we were unable to recover it. 00:33:53.791 [2024-10-13 17:44:02.086667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.087038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.087096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.791 qpair failed and we were unable to recover it. 
00:33:53.791 [2024-10-13 17:44:02.087480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.087724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.087753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.791 qpair failed and we were unable to recover it. 00:33:53.791 [2024-10-13 17:44:02.088105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.088464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.088496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.791 qpair failed and we were unable to recover it. 00:33:53.791 [2024-10-13 17:44:02.088834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.089199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.089232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.791 qpair failed and we were unable to recover it. 00:33:53.791 [2024-10-13 17:44:02.089614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.089981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.090012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.791 qpair failed and we were unable to recover it. 
00:33:53.791 [2024-10-13 17:44:02.090372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.090730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.090761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.791 qpair failed and we were unable to recover it. 00:33:53.791 [2024-10-13 17:44:02.091019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.091444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.091475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.791 qpair failed and we were unable to recover it. 00:33:53.791 [2024-10-13 17:44:02.091830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.092236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.092269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.791 qpair failed and we were unable to recover it. 00:33:53.791 [2024-10-13 17:44:02.092629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.092992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.791 [2024-10-13 17:44:02.093024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.791 qpair failed and we were unable to recover it. 
00:33:53.792 [2024-10-13 17:44:02.093448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.792 [2024-10-13 17:44:02.093764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.792 [2024-10-13 17:44:02.093794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.792 qpair failed and we were unable to recover it. 00:33:53.792 [2024-10-13 17:44:02.094165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.792 [2024-10-13 17:44:02.094542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.792 [2024-10-13 17:44:02.094573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.792 qpair failed and we were unable to recover it. 00:33:53.792 [2024-10-13 17:44:02.094928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.792 [2024-10-13 17:44:02.095187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.792 [2024-10-13 17:44:02.095218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.792 qpair failed and we were unable to recover it. 00:33:53.792 [2024-10-13 17:44:02.095554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.792 [2024-10-13 17:44:02.095779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.792 [2024-10-13 17:44:02.095811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.792 qpair failed and we were unable to recover it. 
00:33:53.792 [2024-10-13 17:44:02.096030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.792 [2024-10-13 17:44:02.096470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.792 [2024-10-13 17:44:02.096504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.793 qpair failed and we were unable to recover it. 00:33:53.793 [2024-10-13 17:44:02.096886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-10-13 17:44:02.097250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-10-13 17:44:02.097282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.793 qpair failed and we were unable to recover it. 00:33:53.793 [2024-10-13 17:44:02.097643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-10-13 17:44:02.097902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-10-13 17:44:02.097932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.793 qpair failed and we were unable to recover it. 00:33:53.793 [2024-10-13 17:44:02.098200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-10-13 17:44:02.098587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-10-13 17:44:02.098617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.793 qpair failed and we were unable to recover it. 
00:33:53.793 [2024-10-13 17:44:02.098986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-10-13 17:44:02.099264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-10-13 17:44:02.099294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.793 qpair failed and we were unable to recover it. 00:33:53.793 [2024-10-13 17:44:02.099656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-10-13 17:44:02.100038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-10-13 17:44:02.100095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.794 qpair failed and we were unable to recover it. 00:33:53.794 [2024-10-13 17:44:02.100333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.794 [2024-10-13 17:44:02.100547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.794 [2024-10-13 17:44:02.100580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.794 qpair failed and we were unable to recover it. 00:33:53.794 [2024-10-13 17:44:02.100913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.794 [2024-10-13 17:44:02.101298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.794 [2024-10-13 17:44:02.101330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.794 qpair failed and we were unable to recover it. 
00:33:53.794 [2024-10-13 17:44:02.101685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.794 [2024-10-13 17:44:02.102007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.794 [2024-10-13 17:44:02.102038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.794 qpair failed and we were unable to recover it. 00:33:53.794 [2024-10-13 17:44:02.102437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.794 [2024-10-13 17:44:02.102793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.794 [2024-10-13 17:44:02.102822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.794 qpair failed and we were unable to recover it. 00:33:53.795 [2024-10-13 17:44:02.103085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.795 [2024-10-13 17:44:02.103477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.795 [2024-10-13 17:44:02.103507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.795 qpair failed and we were unable to recover it. 00:33:53.795 [2024-10-13 17:44:02.103857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.795 [2024-10-13 17:44:02.104223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.795 [2024-10-13 17:44:02.104253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.795 qpair failed and we were unable to recover it. 
00:33:53.795 [2024-10-13 17:44:02.104626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.795 [2024-10-13 17:44:02.105001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.795 [2024-10-13 17:44:02.105032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.795 qpair failed and we were unable to recover it. 00:33:53.795 [2024-10-13 17:44:02.105351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.795 [2024-10-13 17:44:02.105707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.796 [2024-10-13 17:44:02.105737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.796 qpair failed and we were unable to recover it. 00:33:53.796 [2024-10-13 17:44:02.106163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.796 [2024-10-13 17:44:02.106554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.796 [2024-10-13 17:44:02.106584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.796 qpair failed and we were unable to recover it. 00:33:53.796 [2024-10-13 17:44:02.106806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.796 [2024-10-13 17:44:02.107053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.796 [2024-10-13 17:44:02.107102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.796 qpair failed and we were unable to recover it. 
00:33:53.796 [2024-10-13 17:44:02.107475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.796 [2024-10-13 17:44:02.107829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.796 [2024-10-13 17:44:02.107860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.796 qpair failed and we were unable to recover it. 00:33:53.797 [2024-10-13 17:44:02.108207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-10-13 17:44:02.108452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-10-13 17:44:02.108486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.797 qpair failed and we were unable to recover it. 00:33:53.797 [2024-10-13 17:44:02.108722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-10-13 17:44:02.108957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-10-13 17:44:02.108990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.797 qpair failed and we were unable to recover it. 00:33:53.797 [2024-10-13 17:44:02.109253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-10-13 17:44:02.109476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-10-13 17:44:02.109508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.797 qpair failed and we were unable to recover it. 
00:33:53.797 [2024-10-13 17:44:02.109875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-10-13 17:44:02.110259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-10-13 17:44:02.110292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.797 qpair failed and we were unable to recover it. 00:33:53.797 [2024-10-13 17:44:02.110689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-10-13 17:44:02.111040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-10-13 17:44:02.111077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-10-13 17:44:02.111370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-10-13 17:44:02.111699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-10-13 17:44:02.111729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-10-13 17:44:02.112091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-10-13 17:44:02.112488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-10-13 17:44:02.112519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 
00:33:53.798 [2024-10-13 17:44:02.112892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-10-13 17:44:02.113215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-10-13 17:44:02.113247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-10-13 17:44:02.113642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-10-13 17:44:02.114008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-10-13 17:44:02.114043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.114491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.114858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.114888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.115235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.115479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.115509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 
00:33:53.799 [2024-10-13 17:44:02.115870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.116180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.116213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.116470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.116826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.116856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.117128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.117357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.117390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.117602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.118002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.118032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 
00:33:53.799 [2024-10-13 17:44:02.118456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.118826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.118856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.119212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.119553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.119583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.119937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.120376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.120407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.120756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.121116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.121153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 
00:33:53.799 [2024-10-13 17:44:02.121554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.121906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.121936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.122277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.122635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.122665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.123025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.123176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.123211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.123595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.123823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.123853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 
00:33:53.799 [2024-10-13 17:44:02.124117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.124440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.124471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.124720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.125087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.125118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.125586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.125944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.125974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.126307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.126643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.126673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 
00:33:53.799 [2024-10-13 17:44:02.126934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.127306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.127337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.127671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.128023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.128054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.128489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.128833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.128865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-10-13 17:44:02.129118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-10-13 17:44:02.129505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.129536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 
00:33:53.800 [2024-10-13 17:44:02.129901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.130180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.130210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.130589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.130956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.130987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.131379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.131735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.131766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.132126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.132404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.132438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 
00:33:53.800 [2024-10-13 17:44:02.132803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.133156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.133188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.133474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.133727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.133761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.133994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.134239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.134270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.134559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.134811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.134844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 
00:33:53.800 [2024-10-13 17:44:02.135130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.135508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.135539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.135881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.136243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.136274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.136639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.137027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.137057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.137506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.137818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.137847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 
00:33:53.800 [2024-10-13 17:44:02.138186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.138565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.138596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.138852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.139201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.139233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.139589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.139954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.139985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.140353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.140745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.140777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 
00:33:53.800 [2024-10-13 17:44:02.141028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.141456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.141487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.141886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.142184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.142216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.142587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.142917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.142948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-10-13 17:44:02.143353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.143717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-10-13 17:44:02.143749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 
00:33:53.800 [2024-10-13 17:44:02.144121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.800 [2024-10-13 17:44:02.144495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.800 [2024-10-13 17:44:02.144525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.144898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.145261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.145292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.145561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.145901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.145932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.146321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.146672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.146701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.146946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.147364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.147396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.147781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.148173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.148205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.148585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.148923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.148953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.149368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.149736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.149769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.150167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.150533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.150565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.150934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.151278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.151309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.151582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.151951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.151981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.152298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.152656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.152686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.153072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.153466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.153497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.153891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.154177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.154212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.154594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.154981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.155011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.155293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.155665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.155696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.156034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.156409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.156440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.156777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.157049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.157091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.157472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.157881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.157911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.158349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.158676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.158707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.159096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.159335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.159366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.159757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.160107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.160139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.160551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.160792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.160823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.161195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.161557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.161588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.161963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.162385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.162416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.162824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.163159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.163193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.163456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.163803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.163834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.164220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.164465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.164494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.164867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.165248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.165279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.165648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.166021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.166051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.166292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.166598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.166628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.166986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.167361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.167394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.167759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.168124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.168158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.168539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.168920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.168951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.169159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.169484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.169515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.169883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.170215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.170248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.170648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.171011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.171042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.171419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.171648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.171680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.172111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.172495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.172526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.172838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.173117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.173148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.173547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.173910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.173939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.174300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.174668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.174699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.175082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.175472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.175503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.175857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.176220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.176252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.176646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.176954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.176985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.177322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.177643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.801 [2024-10-13 17:44:02.177675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.801 qpair failed and we were unable to recover it.
00:33:53.801 [2024-10-13 17:44:02.178037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.178447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.178479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.178823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.179016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.179044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.179460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.179720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.179753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.180133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.180503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.180534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.180885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.181245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.181276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.181647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.182021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.182051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.182476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.182749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.182778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.183116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.183488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.183519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.183898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.184267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.184298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.184658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.185024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.185056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.185487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.185829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.185861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.186236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.186603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.186633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.186998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.187360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.187392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.187761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.188012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.188045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.188371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.188624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.188654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.189032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.189432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.189464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.189819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.190055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.190099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.190496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.190719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.190751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.191118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.191395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.191425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.191792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.192147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.192179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.192564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.192724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.192756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.193126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.193488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.193518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.193856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.194255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.194287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.194664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.195026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.195057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.195469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.195704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.195733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.196101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.196529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.802 [2024-10-13 17:44:02.196560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:53.802 qpair failed and we were unable to recover it.
00:33:53.802 [2024-10-13 17:44:02.196949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.197282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.197313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.197662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.198024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.198054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.198446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.198807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.198837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.199213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.199578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.199607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 
00:33:53.802 [2024-10-13 17:44:02.199866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.200254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.200286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.200655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.201014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.201045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.201476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.201840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.201870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.202233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.202486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.202515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 
00:33:53.802 [2024-10-13 17:44:02.202844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.203083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.203116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.203470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.203826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.203856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.204103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.204518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.204548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.204784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.205149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.205181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 
00:33:53.802 [2024-10-13 17:44:02.205423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.205767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.205797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.206078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.206464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.206494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.206840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.207209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.207240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.207608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.207979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.208011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 
00:33:53.802 [2024-10-13 17:44:02.208179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.208583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.208613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.208837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.209089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.209122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.209380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.209767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.209799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.210106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.210485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.210516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 
00:33:53.802 [2024-10-13 17:44:02.210855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.211093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.211126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.211489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.211844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.211875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.212111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.212482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.212512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.802 qpair failed and we were unable to recover it. 00:33:53.802 [2024-10-13 17:44:02.212766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.802 [2024-10-13 17:44:02.213055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.213098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.213406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.213768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.213800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.214051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.214299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.214333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.214689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.215052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.215099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.215501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.215820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.215852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.216133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.216508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.216538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.216920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.217298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.217329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.217679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.218077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.218108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.218388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.218759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.218790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.219149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.219536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.219566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.219901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.220243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.220275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.220643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.221024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.221054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.221440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.221812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.221843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.222197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.222564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.222601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.222938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.223342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.223374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.223754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.224129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.224161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.224540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.224930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.224961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.225366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.225745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.225775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.226149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.226519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.226550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.226916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.227381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.227412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.227764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.228144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.228177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.228595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.228967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.228997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.229343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.229576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.229606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.229879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.230215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.230253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.230633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.230985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.231016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.231424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.231785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.231815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.231945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.232169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.232202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.232565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.232971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.233002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.233369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.233721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.233752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.233996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.234366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.234397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.234758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.235163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.235194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.235559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.235921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.235952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.236230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.236593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.236624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.236952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.237312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.237344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.237724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.238083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.238115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.238494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.238840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.238869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.239230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.239594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.239624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.240001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.240349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.240380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.240750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.241113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.241148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.241529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.241736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.241767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.242020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.242295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.242326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.242552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.242875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.242906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.243169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.243501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.243532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.243882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.244200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.244232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.244486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.244828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.244859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.245142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.245542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.245572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.245831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.246148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.246179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.246508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.246900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.246931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.247299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.247510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.247541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 
00:33:53.803 [2024-10-13 17:44:02.247877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.248140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.248172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.803 qpair failed and we were unable to recover it. 00:33:53.803 [2024-10-13 17:44:02.248417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.248768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.803 [2024-10-13 17:44:02.248798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.249032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.249379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.249409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.249750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.250113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.250145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 
00:33:53.804 [2024-10-13 17:44:02.250484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.250702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.250735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.250990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.251388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.251420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.251775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.252013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.252043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.252331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.252716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.252746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 
00:33:53.804 [2024-10-13 17:44:02.253079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.253457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.253486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.253708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.253989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.254019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.254278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.254690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.254719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.255080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.255443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.255473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 
00:33:53.804 [2024-10-13 17:44:02.255811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.256231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.256262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.256591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.256950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.256980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.257359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.257709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.257741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.258095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.258412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.258444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 
00:33:53.804 [2024-10-13 17:44:02.258693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.258942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.258973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.259311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.259622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.259652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.259895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.260224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.260257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.260633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.260997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.261028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 
00:33:53.804 [2024-10-13 17:44:02.261443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.261792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.261822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.262184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.262551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.262580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.262926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.263258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.263288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.263650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.263877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.263907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 
00:33:53.804 [2024-10-13 17:44:02.264273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.264657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.264689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.265007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.265362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.265396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.265751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.266024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.266056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.266407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.266693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.266724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 
00:33:53.804 [2024-10-13 17:44:02.267092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.267459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.267492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.267907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.268249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.268286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.268625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.268985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.269019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.269441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.269835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.269867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 
00:33:53.804 [2024-10-13 17:44:02.270127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.270513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.270545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.270777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.271149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.271184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.271586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.271945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.804 [2024-10-13 17:44:02.271979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:53.804 qpair failed and we were unable to recover it. 00:33:53.804 [2024-10-13 17:44:02.272358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.272715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.272749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 
00:33:54.074 [2024-10-13 17:44:02.272929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.273087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.273121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.273539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.273932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.273965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.274259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.274622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.274654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.275046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.275464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.275497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 
00:33:54.074 [2024-10-13 17:44:02.275869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.276010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.276044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.276451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.276797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.276830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.277156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.277518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.277549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.277911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.278245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.278278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 
00:33:54.074 [2024-10-13 17:44:02.278625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.279001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.279032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.279427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.279788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.279822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.280200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.280565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.280598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.280941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.281189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.281225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 
00:33:54.074 [2024-10-13 17:44:02.281648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.281879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.281910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.282340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.282694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.282725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.283091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.283503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.283535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.074 qpair failed and we were unable to recover it. 00:33:54.074 [2024-10-13 17:44:02.283939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.074 [2024-10-13 17:44:02.284297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.284330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 
00:33:54.075 [2024-10-13 17:44:02.284701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.285095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.285129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.285510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.285884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.285915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.286341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.286750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.286782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.287146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.287529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.287559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 
00:33:54.075 [2024-10-13 17:44:02.287937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.288290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.288323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.288688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.289053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.289095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.289509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.289886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.289917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.290260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.290683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.290713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 
00:33:54.075 [2024-10-13 17:44:02.291092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.291471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.291502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.291854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.292137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.292168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.292528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.292878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.292909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.293210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.293584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.293615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 
00:33:54.075 [2024-10-13 17:44:02.293961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.294203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.294236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.294628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.294848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.294882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.295327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.295680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.295711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.295933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.296191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.296223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 
00:33:54.075 [2024-10-13 17:44:02.296600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.296830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.296862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.297101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.297326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.297358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.297580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.297820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.297854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.298191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.298568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.298600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 
00:33:54.075 [2024-10-13 17:44:02.298971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.299327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.299359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.299733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.300096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.300128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.300538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.300869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.300900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.301154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.301433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.301468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 
00:33:54.075 [2024-10-13 17:44:02.301774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.302136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.302169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.075 qpair failed and we were unable to recover it. 00:33:54.075 [2024-10-13 17:44:02.302504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.302780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.075 [2024-10-13 17:44:02.302812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.303088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.303452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.303484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.303846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.304168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.304200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 
00:33:54.076 [2024-10-13 17:44:02.304597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.304979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.305011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.305326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.305709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.305739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.305961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.306305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.306337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.306700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.307081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.307112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 
00:33:54.076 [2024-10-13 17:44:02.307462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.307717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.307748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.308127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.308496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.308531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.308662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.308891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.308925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.309291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.309643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.309674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 
00:33:54.076 [2024-10-13 17:44:02.310051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.310405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.310436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.310807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.311051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.311093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.311497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.311744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.311776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.312182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.312531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.312561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 
00:33:54.076 [2024-10-13 17:44:02.312859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.313104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.313138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.313500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.313865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.313898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.314259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.314582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.314617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.314883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.315221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.315255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 
00:33:54.076 [2024-10-13 17:44:02.315645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.315973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.316004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.316162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.316339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.316371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.316721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.317096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.317129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.317418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.317750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.317780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 
00:33:54.076 [2024-10-13 17:44:02.318165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.318412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.318443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.318826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.319207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.319238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.319604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.319965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.319996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 00:33:54.076 [2024-10-13 17:44:02.320478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.320762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.320794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.076 qpair failed and we were unable to recover it. 
00:33:54.076 [2024-10-13 17:44:02.321107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.076 [2024-10-13 17:44:02.321489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.321520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.321859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.322249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.322286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.322653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.323002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.323032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.323387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.323728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.323760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 
00:33:54.077 [2024-10-13 17:44:02.324155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.324516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.324546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.324917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.325263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.325296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.325685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.326039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.326077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.326445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.326775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.326807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 
00:33:54.077 [2024-10-13 17:44:02.327153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.327528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.327558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.327915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.328281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.328313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.328681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.329046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.329096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.329460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.329815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.329853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 
00:33:54.077 [2024-10-13 17:44:02.330192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.330569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.330600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.330988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.331319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.331349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.331707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.332077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.332109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.332462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.332822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.332853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 
00:33:54.077 [2024-10-13 17:44:02.333092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.333481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.333512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.333854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.334197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.334228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.334647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.334897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.334925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.335270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.335637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.335667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 
00:33:54.077 [2024-10-13 17:44:02.336074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.336433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.336463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.336805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.337149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.337186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.337556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.337901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.337932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.338305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.338584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.338613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 
00:33:54.077 [2024-10-13 17:44:02.339022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.339417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.339448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.339825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.340191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.340224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.340577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.340931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.077 [2024-10-13 17:44:02.340963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.077 qpair failed and we were unable to recover it. 00:33:54.077 [2024-10-13 17:44:02.341123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.341577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.341610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 
00:33:54.078 [2024-10-13 17:44:02.341852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.342057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.342098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.342469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.342825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.342857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.343232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.343590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.343620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.343882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.344258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.344296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 
00:33:54.078 [2024-10-13 17:44:02.344591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.344960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.344990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.345377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.345767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.345798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.346022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.346433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.346464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.346915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.347262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.347294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 
00:33:54.078 [2024-10-13 17:44:02.347690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.348014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.348045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.348436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.348799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.348829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.349270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.349591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.349622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.349979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.350382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.350415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 
00:33:54.078 [2024-10-13 17:44:02.350775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.350986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.351018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.351281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.351640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.351672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.352042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.352389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.352421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.352748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.353116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.353147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 
00:33:54.078 [2024-10-13 17:44:02.353394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.353744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.353774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.354134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.354390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.354423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.354779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.355143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.355175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.355533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.355861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.355892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 
00:33:54.078 [2024-10-13 17:44:02.356248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.356615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.356644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.357055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.357497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.357528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.357879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.358219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.358251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.358599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.358956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.358987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 
00:33:54.078 [2024-10-13 17:44:02.359467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.359820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.359851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.078 qpair failed and we were unable to recover it. 00:33:54.078 [2024-10-13 17:44:02.360164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.078 [2024-10-13 17:44:02.360543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.360574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.360935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.361381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.361412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.361776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.362109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.362141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 
00:33:54.079 [2024-10-13 17:44:02.362492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.362848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.362878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.363222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.363585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.363615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.363965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.364351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.364381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.364752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.365079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.365110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 
00:33:54.079 [2024-10-13 17:44:02.365521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.365908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.365938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.366281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.366632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.366663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.367020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.367372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.367405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.367764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.368085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.368119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 
00:33:54.079 [2024-10-13 17:44:02.368521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.368883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.368912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.369282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.369657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.369686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.369997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.370349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.370380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.370782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.371155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.371187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 
00:33:54.079 [2024-10-13 17:44:02.371508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.371756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.371785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.372162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.372544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.372574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.372935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.373312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.373344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.373694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.373937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.373968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 
00:33:54.079 [2024-10-13 17:44:02.374276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.374625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.374655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.375019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.375339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.375370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.375722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.376097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.079 [2024-10-13 17:44:02.376130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.079 qpair failed and we were unable to recover it. 00:33:54.079 [2024-10-13 17:44:02.376401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.376749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.376780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 
00:33:54.080 [2024-10-13 17:44:02.377180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.377520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.377552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.377918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.378208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.378241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.378621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.378977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.379007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.379393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.379783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.379813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 
00:33:54.080 [2024-10-13 17:44:02.380160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.380388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.380420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.380775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.381174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.381205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.381573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.381940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.381973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.382334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.382701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.382730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 
00:33:54.080 [2024-10-13 17:44:02.383166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.383421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.383450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.383672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.383957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.383989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.384225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.384614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.384646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.385002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.385413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.385447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 
00:33:54.080 [2024-10-13 17:44:02.385693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.386044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.386086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.386386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.386783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.386812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.387148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.387396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.387426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.387779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.388143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.388176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 
00:33:54.080 [2024-10-13 17:44:02.388559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.388718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.388746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.389117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.389347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.389378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.389739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.389975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.390004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.390377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.390727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.390758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 
00:33:54.080 [2024-10-13 17:44:02.391001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.391371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.391402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.391761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.392127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.392160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.392523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.392879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.392911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.393254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.393673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.393703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 
00:33:54.080 [2024-10-13 17:44:02.394082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.394537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.394568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.080 qpair failed and we were unable to recover it. 00:33:54.080 [2024-10-13 17:44:02.394945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.395319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.080 [2024-10-13 17:44:02.395349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.395726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.396105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.396138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.396483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.396814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.396844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 
00:33:54.081 [2024-10-13 17:44:02.397118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.397353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.397385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.397565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.397828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.397858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.398244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.398472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.398504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.398866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.399105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.399136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 
00:33:54.081 [2024-10-13 17:44:02.399404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.399620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.399652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.399906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.400048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.400091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.400451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.400790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.400820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.401195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.401577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.401608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 
00:33:54.081 [2024-10-13 17:44:02.401970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.402319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.402350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.402716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.403095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.403125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.403519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.403883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.403913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.404302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.404650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.404681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 
00:33:54.081 [2024-10-13 17:44:02.405021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.405447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.405480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.405816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.406157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.406189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.406562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.406926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.406956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.407295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.407597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.407628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 
00:33:54.081 [2024-10-13 17:44:02.408001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.408359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.408390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.408729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.409078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.409110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.409491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.409845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.409876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.410231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.410586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.410617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 
00:33:54.081 [2024-10-13 17:44:02.410993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.411372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.411404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.411786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.412139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.412171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.412519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.412864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.412898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 00:33:54.081 [2024-10-13 17:44:02.413240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.413620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.413650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.081 qpair failed and we were unable to recover it. 
00:33:54.081 [2024-10-13 17:44:02.414074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.081 [2024-10-13 17:44:02.414470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.414500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.414871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.415121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.415152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.415517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.415884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.415914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.416248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.416529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.416560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 
00:33:54.082 [2024-10-13 17:44:02.416889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.417255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.417285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.417546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.417904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.417935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.418302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.418533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.418564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.418930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.419317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.419349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 
00:33:54.082 [2024-10-13 17:44:02.419683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.420059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.420116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.420509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.420871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.420901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.421237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.421595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.421624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.421990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.422159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.422191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 
00:33:54.082 [2024-10-13 17:44:02.422530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.422789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.422821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.423172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.423404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.423434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.423709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.424072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.424103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.424473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.424777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.424808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 
00:33:54.082 [2024-10-13 17:44:02.425188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.425443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.425477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.425707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.426055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.426095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.426465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.426820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.426851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.427115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.427436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.427466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 
00:33:54.082 [2024-10-13 17:44:02.427776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.428147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.428179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.428552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.428912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.428942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.429202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.429431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.429461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.429812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.430142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.430174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 
00:33:54.082 [2024-10-13 17:44:02.430431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.430792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.430823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.431205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.431587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.431618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.082 [2024-10-13 17:44:02.431932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.432295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.082 [2024-10-13 17:44:02.432325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.082 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.432669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.432926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.432958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 
00:33:54.083 [2024-10-13 17:44:02.433334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.433560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.433592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.433912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.434304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.434335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.434588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.434966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.434997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.435164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.435399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.435428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 
00:33:54.083 [2024-10-13 17:44:02.435793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.436024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.436055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.436422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.436820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.436852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.437262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.437505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.437541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.437894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.438211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.438244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 
00:33:54.083 [2024-10-13 17:44:02.438606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.438967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.439000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.439343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.439586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.439615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.439860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.440228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.440259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.440644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.441019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.441051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 
00:33:54.083 [2024-10-13 17:44:02.441530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.441757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.441786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.442161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.442551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.442582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.442981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.443160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.443190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.443573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.443928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.443958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 
00:33:54.083 [2024-10-13 17:44:02.444323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.444689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.444728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.444998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.445377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.445409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.445775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.446133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.446164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.446539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.446760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.446791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 
00:33:54.083 [2024-10-13 17:44:02.447023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.447375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.447407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.447779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.448136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.448170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.448545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.448888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.448918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.449257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.449619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.449649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 
00:33:54.083 [2024-10-13 17:44:02.450005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.450338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.450369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.083 qpair failed and we were unable to recover it. 00:33:54.083 [2024-10-13 17:44:02.450626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.450971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.083 [2024-10-13 17:44:02.451001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-10-13 17:44:02.451378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-10-13 17:44:02.451709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-10-13 17:44:02.451746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-10-13 17:44:02.452103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-10-13 17:44:02.452472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-10-13 17:44:02.452504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 
00:33:54.084 [2024-10-13 17:44:02.452754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.453110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.453142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.453509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.453835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.453866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.454131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.454387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.454416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.454789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.455148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.455179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.455577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.455855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.455886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.456154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.456536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.456567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.456933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.457233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.457265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.457638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.458019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.458050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.458479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.458858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.458894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.459153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.459516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.459547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.459917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.460164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.460197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.460562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.460963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.460994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.461417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.461848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.461878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.462153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.462553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.462583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.462985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.463341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.463372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.463678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.464040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.464082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.464497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.464778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.464810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.465185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.465583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.465614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.466048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.466481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.466512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.466876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.467181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.467212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.084 qpair failed and we were unable to recover it.
00:33:54.084 [2024-10-13 17:44:02.467606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.084 [2024-10-13 17:44:02.467935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.467967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.468372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.468644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.468677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.468894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.469111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.469141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.469419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.469773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.469804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.470152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.470520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.470549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.470772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.470974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.471005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.471294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.471652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.471682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.472039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.472401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.472432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.472756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.472977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.473011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.473337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.473686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.473716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.474019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.474404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.474436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.474812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.475152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.475184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.475578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.475939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.475969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.476336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.476656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.476687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.477044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.477331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.477363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.477725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.478084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.478115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.478518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.478877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.478909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.479185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.479567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.479597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.479994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.480365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.480396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.480748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.481121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.481154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.481416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.481770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.481801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.482179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.482546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.482576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.482941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.483294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.483326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.483678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.484033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.484084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.484321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.484687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.484719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.485017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.485437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.485469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.485829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.486185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.085 [2024-10-13 17:44:02.486217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.085 qpair failed and we were unable to recover it.
00:33:54.085 [2024-10-13 17:44:02.486582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.486927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.486957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.487410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.487771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.487800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.488142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.488529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.488559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.488920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.489277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.489308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.489665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.490037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.490078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.490372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.490767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.490796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.491148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.491507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.491538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.491749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.492009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.492038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.492477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.492737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.492767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.493147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.493425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.493457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.493786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.494146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.494178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.494541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.494920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.494950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.495341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.495666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.495697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.496072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.496495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.496526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.496746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.497116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.497147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.497502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.497860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.497891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.498247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.498639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.498669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.498895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.499319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.499350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.499711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.499922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.499951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.500405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.500760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.500789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.501251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.501620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.501650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.502016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.502388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.502420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.502766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.503002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.503034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.503330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.503607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.503636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.503922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.504268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.086 [2024-10-13 17:44:02.504301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420
00:33:54.086 qpair failed and we were unable to recover it.
00:33:54.086 [2024-10-13 17:44:02.504684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-10-13 17:44:02.505048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-10-13 17:44:02.505107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-10-13 17:44:02.505482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-10-13 17:44:02.505843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.505874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.506219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.506596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.506627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.507018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.507396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.507427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-10-13 17:44:02.507848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.508183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.508217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.508590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.508990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.509020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.509321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.509678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.509708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.510083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.510468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.510498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-10-13 17:44:02.510911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.511279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.511311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.511710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.512038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.512080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.512428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.512750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.512782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.513145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.513542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.513572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-10-13 17:44:02.513952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.514274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.514307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.514572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.514857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.514887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.515245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.515630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.515660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.516033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.516421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.516456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-10-13 17:44:02.516770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.517148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.517181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.517540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.517877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.517909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.518169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.518514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.518545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.518891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.519307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.519338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-10-13 17:44:02.519677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.520034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.520077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.520474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.520826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.520858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.521221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.521569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.521601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.521870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.522182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.522214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-10-13 17:44:02.522460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.522876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.522907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.523173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.523491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.523523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.523886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.524250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.524283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-10-13 17:44:02.524716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.525078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-10-13 17:44:02.525111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-10-13 17:44:02.525446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.525701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.525735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.526087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.526330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.526359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.526609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.526815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.526846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.527179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.527440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.527470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-10-13 17:44:02.527817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.528132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.528165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.528412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.528525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.528556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.528894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.529117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.529147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.529524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.529886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.529919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-10-13 17:44:02.530266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.530588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.530620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.531028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.531278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.531310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.531659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.532001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.532032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.532338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.532570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.532605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-10-13 17:44:02.532836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.533189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.533221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.533654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.533901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.533932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.534285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.534531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.534563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.534924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.535212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.535243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-10-13 17:44:02.535610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.535973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.536003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.536355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.536693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.536726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.537092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.537471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.537502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.537866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.538102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.538137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-10-13 17:44:02.538350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.538720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.538751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.539109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.539472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.539503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.539815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.540189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.540221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.540597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.540949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.540981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-10-13 17:44:02.541348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.541737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.541772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.542012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.542423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.542454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-10-13 17:44:02.542749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.543098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-10-13 17:44:02.543128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-10-13 17:44:02.543513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.543865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.543895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 
00:33:54.089 [2024-10-13 17:44:02.544187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.544557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.544588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-10-13 17:44:02.544978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.545359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.545391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-10-13 17:44:02.545745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.545989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.546023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-10-13 17:44:02.546381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.546737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.546768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 
00:33:54.089 [2024-10-13 17:44:02.547106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.547461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.547493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-10-13 17:44:02.547843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.548194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.548226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-10-13 17:44:02.548499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.548926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.548956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-10-13 17:44:02.549304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.549664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.549695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 
00:33:54.089 [2024-10-13 17:44:02.550088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.550459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.550490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-10-13 17:44:02.550857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.551099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.551150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-10-13 17:44:02.551528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.551769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.551802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-10-13 17:44:02.552168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.552542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-10-13 17:44:02.552580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 
00:33:54.362 [2024-10-13 17:44:02.612545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.612838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.612867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 00:33:54.362 [2024-10-13 17:44:02.613247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.613659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.613689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 00:33:54.362 [2024-10-13 17:44:02.613962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.614324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.614355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 00:33:54.362 [2024-10-13 17:44:02.614708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.615040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.615080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 
00:33:54.362 [2024-10-13 17:44:02.615460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.615608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.615641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 00:33:54.362 [2024-10-13 17:44:02.615994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.616182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.616211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 00:33:54.362 [2024-10-13 17:44:02.616624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.616839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.616870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 00:33:54.362 [2024-10-13 17:44:02.617216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.617555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.617585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 
00:33:54.362 [2024-10-13 17:44:02.617943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.618276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.618309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 00:33:54.362 [2024-10-13 17:44:02.618711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.618919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.618947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 00:33:54.362 [2024-10-13 17:44:02.619207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.619578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.619609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 00:33:54.362 [2024-10-13 17:44:02.620073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.620459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.362 [2024-10-13 17:44:02.620489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7158000b90 with addr=10.0.0.2, port=4420 00:33:54.362 qpair failed and we were unable to recover it. 
00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Write completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Write completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Write completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Write completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Read completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Write completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 
Write completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.362 Write completed with error (sct=0, sc=8) 00:33:54.362 starting I/O failed 00:33:54.363 Write completed with error (sct=0, sc=8) 00:33:54.363 starting I/O failed 00:33:54.363 Read completed with error (sct=0, sc=8) 00:33:54.363 starting I/O failed 00:33:54.363 Write completed with error (sct=0, sc=8) 00:33:54.363 starting I/O failed 00:33:54.363 Write completed with error (sct=0, sc=8) 00:33:54.363 starting I/O failed 00:33:54.363 Write completed with error (sct=0, sc=8) 00:33:54.363 starting I/O failed 00:33:54.363 Write completed with error (sct=0, sc=8) 00:33:54.363 starting I/O failed 00:33:54.363 Write completed with error (sct=0, sc=8) 00:33:54.363 starting I/O failed 00:33:54.363 [2024-10-13 17:44:02.620858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:54.363 [2024-10-13 17:44:02.621286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.621657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.621673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.621968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.622273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.622287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 
00:33:54.363 [2024-10-13 17:44:02.622675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.622755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.622765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.623102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.623456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.623470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.623808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.624148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.624161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.624509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.624820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.624832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 
00:33:54.363 [2024-10-13 17:44:02.625191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.625538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.625550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.625894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.626225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.626238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.626559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.626886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.626899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.627252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.627576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.627589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 
00:33:54.363 [2024-10-13 17:44:02.627920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.628130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.628143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.628482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.628798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.628811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.629159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.629513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.629526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.629713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.630075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.630089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 
00:33:54.363 [2024-10-13 17:44:02.630500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.630864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.630876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.631220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.631550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.631563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.631898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.632227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.632240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.632588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.632950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.632963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 
00:33:54.363 [2024-10-13 17:44:02.633271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.633587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.633601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.633673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.633999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.634014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.634227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.634590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.634603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.634992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.635358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.635371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 
00:33:54.363 [2024-10-13 17:44:02.635586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.635945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.635958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.636296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.636630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.636642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.363 qpair failed and we were unable to recover it. 00:33:54.363 [2024-10-13 17:44:02.636999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.363 [2024-10-13 17:44:02.637346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.637360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.637704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.638072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.638088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 
00:33:54.364 [2024-10-13 17:44:02.638432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.638756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.638769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.639120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.639455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.639468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.639847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.640136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.640148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.640475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.640799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.640814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 
00:33:54.364 [2024-10-13 17:44:02.641220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.641579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.641591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.641918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.642236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.642249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.642601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.642865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.642877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.643074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.643433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.643446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 
00:33:54.364 [2024-10-13 17:44:02.643780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.644105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.644118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.644468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.644792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.644804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.645157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.645470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.645483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.645811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.646137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.646150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 
00:33:54.364 [2024-10-13 17:44:02.646349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.646687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.646700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.647044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.647372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.647384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.647731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.648056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.648076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.648418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.648737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.648749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 
00:33:54.364 [2024-10-13 17:44:02.649089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.649444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.649457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.649795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.650137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.650150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.650473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.650796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.650808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 00:33:54.364 [2024-10-13 17:44:02.651124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.651453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.364 [2024-10-13 17:44:02.651466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.364 qpair failed and we were unable to recover it. 
00:33:54.364 [2024-10-13 17:44:02.651792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.364 [2024-10-13 17:44:02.652124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.364 [2024-10-13 17:44:02.652137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.364 qpair failed and we were unable to recover it.
00:33:54.364 [2024-10-13 17:44:02.652510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.364 [2024-10-13 17:44:02.652867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.364 [2024-10-13 17:44:02.652881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.364 qpair failed and we were unable to recover it.
00:33:54.364 [2024-10-13 17:44:02.653198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.364 [2024-10-13 17:44:02.653523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.364 [2024-10-13 17:44:02.653535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.364 qpair failed and we were unable to recover it.
00:33:54.364 [2024-10-13 17:44:02.653850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.364 [2024-10-13 17:44:02.654047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.364 [2024-10-13 17:44:02.654060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.364 qpair failed and we were unable to recover it.
00:33:54.364 [2024-10-13 17:44:02.654426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.364 [2024-10-13 17:44:02.654743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.364 [2024-10-13 17:44:02.654757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.364 qpair failed and we were unable to recover it.
00:33:54.364 [2024-10-13 17:44:02.655096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.655489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.655502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.655847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.656114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.656128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.656466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.656794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.656807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.657146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.657496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.657509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.657858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.658166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.658179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.658522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.658878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.658891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.659212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.659420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.659432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.659768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.660091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.660104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.660420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.660746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.660758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.661097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.661454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.661466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.661807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.662131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.662144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.662489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.662787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.662800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.663005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.663317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.663329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.663675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.663997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.664010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.664339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.664661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.664675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.665016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.665366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.665381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.665699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.666053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.666073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.666388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.666715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.666729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.667058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.667420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.667433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.667743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.668077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.668089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.668402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.668745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.668754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.669054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.669429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.669439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.669761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.670083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.670093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.365 qpair failed and we were unable to recover it.
00:33:54.365 [2024-10-13 17:44:02.670441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.670749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.365 [2024-10-13 17:44:02.670760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.671133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.671456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.671466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.671754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.672077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.672088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.672424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.672780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.672795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.673022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.673357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.673372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.673691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.674010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.674024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.674382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.674737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.674755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.675085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.675440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.675454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.675786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.676109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.676125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.676444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.676779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.676794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.677098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.677412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.677428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.677756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.678075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.678089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.678289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.678633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.678647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.678955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.679277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.679292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.679644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.679967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.679980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.680344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.680633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.680649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.680968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.681275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.681292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.681603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.681924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.681939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.682281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.682584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.682599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.682935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.683234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.683250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.683590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.683911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.683926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.684239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.684571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.684586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.684937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.685235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.685250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.685588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.685882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.366 [2024-10-13 17:44:02.685897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.366 qpair failed and we were unable to recover it.
00:33:54.366 [2024-10-13 17:44:02.686222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.686544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.686558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.686904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.687257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.687271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.687613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.687931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.687946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.688271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.688582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.688595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.688912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.689231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.689244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.689557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.689878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.689892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.690198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.690533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.690545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.690870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.691176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.691189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.691503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.691826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.691839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.692149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.692482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.692494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.692846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.693168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.693183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.693507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.693830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.693844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.694161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.694478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.694492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.694826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.695146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.695162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.695505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.695845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.695857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.696198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.696522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.696534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.696871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.697193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.697206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.697539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.697862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.697874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.698212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.698535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.698548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.698857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.699175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.367 [2024-10-13 17:44:02.699189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.367 qpair failed and we were unable to recover it.
00:33:54.367 [2024-10-13 17:44:02.699506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.699857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.699871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.367 qpair failed and we were unable to recover it. 00:33:54.367 [2024-10-13 17:44:02.700131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.700471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.700484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.367 qpair failed and we were unable to recover it. 00:33:54.367 [2024-10-13 17:44:02.700860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.701177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.701193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.367 qpair failed and we were unable to recover it. 00:33:54.367 [2024-10-13 17:44:02.701387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.701727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.701742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.367 qpair failed and we were unable to recover it. 
00:33:54.367 [2024-10-13 17:44:02.701909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.702222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.702236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.367 qpair failed and we were unable to recover it. 00:33:54.367 [2024-10-13 17:44:02.702417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.702765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.702778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.367 qpair failed and we were unable to recover it. 00:33:54.367 [2024-10-13 17:44:02.703122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.703299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.367 [2024-10-13 17:44:02.703312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.367 qpair failed and we were unable to recover it. 00:33:54.367 [2024-10-13 17:44:02.703619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.703954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.703966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-10-13 17:44:02.704276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.704592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.704604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.704944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.705231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.705244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.705597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.705888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.705902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.706212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.706527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.706539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-10-13 17:44:02.706875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.707194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.707206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.707544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.707868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.707883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.708204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.708534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.708546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.708941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.709247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.709259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-10-13 17:44:02.709577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.709896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.709910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.710248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.710542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.710555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.710852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.711175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.711188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.711420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.711729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.711741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-10-13 17:44:02.712049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.712238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.712252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.712599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.712862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.712875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.713172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.713496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.713509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.713830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.714147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.714164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-10-13 17:44:02.714365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.714555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.714568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.714876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.715196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.715211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.715523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.715884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.715896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.716218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.716540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.716551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-10-13 17:44:02.716887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.717204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.717216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.717550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.717873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.717885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.718183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.718496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.718509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.718845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.719167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.719180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-10-13 17:44:02.719477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.719827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.719840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-10-13 17:44:02.720180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.368 [2024-10-13 17:44:02.720505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.720517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.720857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.721180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.721192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.721530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.721846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.721858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-10-13 17:44:02.722278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.722624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.722636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.722952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.723144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.723157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.723489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.723813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.723824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.724133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.724447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.724459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-10-13 17:44:02.724806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.725113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.725126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.725456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.725779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.725791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.726135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.726462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.726475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.726775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.727162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.727175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-10-13 17:44:02.727418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.727749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.727763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.728100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.728423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.728434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.728776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.729099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.729111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.729430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.729778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.729790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-10-13 17:44:02.730108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.730519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.730532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.730847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.731166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.731179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.731499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.731810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.731822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.732134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.732450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.732462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-10-13 17:44:02.732776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.733097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.733109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.733422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.733742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.733754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.734054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.734408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.734422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.734737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.734930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.734943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-10-13 17:44:02.735240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.735572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.735584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.735889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.736219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.736232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.736527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.736847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.736859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-10-13 17:44:02.737193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.737510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.737522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-10-13 17:44:02.737867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.738171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.369 [2024-10-13 17:44:02.738183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.738516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.738691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.738705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.738886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.739195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.739209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.739546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.739864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.739878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 
00:33:54.370 [2024-10-13 17:44:02.740201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.740539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.740551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.740859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.741193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.741206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.741525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.741884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.741896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.742231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.742549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.742561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 
00:33:54.370 [2024-10-13 17:44:02.742872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.743192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.743205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.743399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.743745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.743757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.744099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.744443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.744455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.744768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.745068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.745081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 
00:33:54.370 [2024-10-13 17:44:02.745389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.745741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.745752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.746055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.746377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.746389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.746732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.747056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.747077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.747263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.747451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.747464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 
00:33:54.370 [2024-10-13 17:44:02.747774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.747944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.747956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.748270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.748592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.748605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.748945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.749264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.749276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.749610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.749918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.749929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 
00:33:54.370 [2024-10-13 17:44:02.750238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.750584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.750595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.750900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.751217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.751229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.751616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.751874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.751885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.752194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.752514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.752527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 
00:33:54.370 [2024-10-13 17:44:02.752834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.753120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.370 [2024-10-13 17:44:02.753132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.370 qpair failed and we were unable to recover it. 00:33:54.370 [2024-10-13 17:44:02.753312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.753633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.753646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.753941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.754129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.754141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.754468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.754786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.754799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 
00:33:54.371 [2024-10-13 17:44:02.755109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.755464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.755477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.755791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.756129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.756142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.756464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.756777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.756789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.757094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.757423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.757435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 
00:33:54.371 [2024-10-13 17:44:02.757747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.758072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.758086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.758418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.758733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.758745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.759032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.759347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.759359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.759676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.759992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.760003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 
00:33:54.371 [2024-10-13 17:44:02.760338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.760690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.760703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.761029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.761349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.761361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.761654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.761997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.762009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.762327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.762644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.762656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 
00:33:54.371 [2024-10-13 17:44:02.762973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.763274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.763287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.763590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.763937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.763949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.764284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.764600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.764612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.764920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.765242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.765255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 
00:33:54.371 [2024-10-13 17:44:02.765586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.765938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.765951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.766290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.766640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.766652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.766985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.767305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.767319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.767629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.767946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.767958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 
00:33:54.371 [2024-10-13 17:44:02.768292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.768602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.768615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.768876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.769191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.769204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.769545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.769868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.371 [2024-10-13 17:44:02.769881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.371 qpair failed and we were unable to recover it. 00:33:54.371 [2024-10-13 17:44:02.770233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.770453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.770465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 
00:33:54.372 [2024-10-13 17:44:02.770777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.771109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.771122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.771433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.771759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.771771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.772123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.772438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.772450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.772669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.772960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.772974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 
00:33:54.372 [2024-10-13 17:44:02.773165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.773523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.773534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.773847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.774167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.774180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.774510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.774828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.774839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.775142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.775461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.775473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 
00:33:54.372 [2024-10-13 17:44:02.775819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.776079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.776092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.776419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.776736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.776748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.777090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.777429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.777441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.777778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.778092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.778108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 
00:33:54.372 [2024-10-13 17:44:02.778305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.778624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.778636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.778956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.779263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.779278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.779616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.779935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.779947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.780267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.780612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.780624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 
00:33:54.372 [2024-10-13 17:44:02.780934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.781200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.781213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.781385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.781700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.781713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.782052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.782370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.782382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.782716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.783051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.783072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 
00:33:54.372 [2024-10-13 17:44:02.783284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.783582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.783596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.783904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.784230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.784245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.784549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.784866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.784878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.785186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.785505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.785518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 
00:33:54.372 [2024-10-13 17:44:02.785823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.786124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.786137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.372 [2024-10-13 17:44:02.786493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.786807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.372 [2024-10-13 17:44:02.786819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.372 qpair failed and we were unable to recover it. 00:33:54.373 [2024-10-13 17:44:02.787164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.787476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.787489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.373 qpair failed and we were unable to recover it. 00:33:54.373 [2024-10-13 17:44:02.787785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.788103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.788116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.373 qpair failed and we were unable to recover it. 
00:33:54.373 [2024-10-13 17:44:02.788438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.788739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.788751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.373 qpair failed and we were unable to recover it. 00:33:54.373 [2024-10-13 17:44:02.789056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.789370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.789382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.373 qpair failed and we were unable to recover it. 00:33:54.373 [2024-10-13 17:44:02.789726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.790043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.790056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.373 qpair failed and we were unable to recover it. 00:33:54.373 [2024-10-13 17:44:02.790394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.790707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.373 [2024-10-13 17:44:02.790720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.373 qpair failed and we were unable to recover it. 
00:33:54.373 [2024-10-13 17:44:02.791050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.791371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.791384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.791713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.792072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.792086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.792492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.792838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.792850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.793152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.793481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.793492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.793833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.794138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.794151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.794445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.794755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.794766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.795087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.795381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.795394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.795576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.795897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.795909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.796208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.796557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.796569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.796886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.797178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.797190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.797510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.797831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.797843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.798144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.798472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.798484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.798796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.799110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.799122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.799305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.799597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.799609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.799895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.800208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.800219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.800522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.800836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.800846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.801153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.801308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.801320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.801597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.801924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.801935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.802261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.802589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.802600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.802872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.803212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.803223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.373 [2024-10-13 17:44:02.803593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.803889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.373 [2024-10-13 17:44:02.803901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.373 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.804215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.804539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.804550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.804886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.805200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.805212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.805523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.805874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.805885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.806185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.806500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.806512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.806816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.807132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.807144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.807429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.807784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.807795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.808098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.808443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.808454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.808802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.809099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.809112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.809422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.809742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.809753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.809953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.810265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.810277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.810597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.810915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.810927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.811233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.811549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.811565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.811890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.812219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.812231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.812571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.812916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.812927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.813239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.813552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.813563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.813869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.814173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.814185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.814510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.814828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.814840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.815175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.815491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.815503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.815805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.816131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.816142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.816453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.816776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.816787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.817091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.817419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.817430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.817704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.818046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.818059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.818368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.818713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.818725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.819037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.819256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.819267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.819600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.819918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.819931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.820156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.820472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.820484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.374 [2024-10-13 17:44:02.820796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.821114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.374 [2024-10-13 17:44:02.821126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.374 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.821438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.821751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.821762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.822075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.822373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.822384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.822667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.822983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.822994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.823176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.823386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.823398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.823583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.823915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.823925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.824241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.824560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.824571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.824854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.825172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.825183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.825485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.825799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.825809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.826117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.826436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.826447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.826754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.827070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.827081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.827385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.827698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.827709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.828004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.828297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.828308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.828472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.828772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.828784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.829079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.829405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.829416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.829705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.830005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.830016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.830321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.830640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.830651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.830952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.831241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.831252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.831562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.831836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.831847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.832165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.832465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.832476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.832779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.833097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.833109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.833416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.833749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.833760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.834071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.834388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.834399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.834756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.835010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.835021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.375 [2024-10-13 17:44:02.835339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.835666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.375 [2024-10-13 17:44:02.835677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.375 qpair failed and we were unable to recover it.
00:33:54.376 [2024-10-13 17:44:02.835982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.836256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.836268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.836576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.836923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.836934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.837237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.837552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.837567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.837883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.838225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.838236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 
00:33:54.376 [2024-10-13 17:44:02.838560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.838879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.838890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.839184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.839392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.839403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.839696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.840044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.840055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.840262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.840601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.840613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 
00:33:54.376 [2024-10-13 17:44:02.840774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.840971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.840983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.841284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.841574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.841586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.841909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.842228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.842240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.842610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.842908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.842919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 
00:33:54.376 [2024-10-13 17:44:02.843229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.843546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.843557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.843869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.844190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.844202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.844564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.844861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.844872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.845256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.845553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.845564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 
00:33:54.376 [2024-10-13 17:44:02.845872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.846184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.846195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.846503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.846821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.846831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.847210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.847524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.847535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.847839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.848165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.848176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 
00:33:54.376 [2024-10-13 17:44:02.848500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.848839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.848850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.849161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.849480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.849493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.849674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.849988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.849999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.850312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.850632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.850643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 
00:33:54.376 [2024-10-13 17:44:02.850977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.851293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.851304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.851609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.851879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.851890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.376 qpair failed and we were unable to recover it. 00:33:54.376 [2024-10-13 17:44:02.852207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.852535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.376 [2024-10-13 17:44:02.852545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.852852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.853143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.853154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 
00:33:54.377 [2024-10-13 17:44:02.853467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.853808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.853819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.854126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.854445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.854456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.854737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.855049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.855060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.855394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.855696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.855708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 
00:33:54.377 [2024-10-13 17:44:02.856073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.856399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.856411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.856715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.857030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.857040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.857371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.857687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.857699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.858001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.858312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.858323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 
00:33:54.377 [2024-10-13 17:44:02.858622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.858936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.858947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.859244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.859561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.859572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.859852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.860172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.860183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.860488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.860803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.860814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 
00:33:54.377 [2024-10-13 17:44:02.861081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.861410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.861421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.861726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.862054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.862068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.862399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.862707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.862718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.863025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.863343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.863354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 
00:33:54.377 [2024-10-13 17:44:02.863664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.863978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.863989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.864297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.864611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.864622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.864906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.865217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.865229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.865533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.865845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.865856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 
00:33:54.377 [2024-10-13 17:44:02.866209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.866552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.866563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.866893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.867233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.867245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.867462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.867767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.867779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.868085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.868277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.868288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 
00:33:54.377 [2024-10-13 17:44:02.868601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.868896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.868907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.377 qpair failed and we were unable to recover it. 00:33:54.377 [2024-10-13 17:44:02.869263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.377 [2024-10-13 17:44:02.869561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.869572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-10-13 17:44:02.869868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.870182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.870194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-10-13 17:44:02.870374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.870687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.870698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 
00:33:54.378 [2024-10-13 17:44:02.871003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.871321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.871332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-10-13 17:44:02.871624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.871939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.871950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-10-13 17:44:02.872115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.872466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.872477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-10-13 17:44:02.872754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.873080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.873092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 
00:33:54.378 [2024-10-13 17:44:02.873386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.873585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.873596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-10-13 17:44:02.873900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.874228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.874239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-10-13 17:44:02.874574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.874888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.874900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-10-13 17:44:02.875106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.875394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.875405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 
00:33:54.378 [2024-10-13 17:44:02.875710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.876037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-10-13 17:44:02.876048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.648 [2024-10-13 17:44:02.876362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.648 [2024-10-13 17:44:02.876683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.648 [2024-10-13 17:44:02.876696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.648 qpair failed and we were unable to recover it. 00:33:54.648 [2024-10-13 17:44:02.876983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.648 [2024-10-13 17:44:02.877300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.648 [2024-10-13 17:44:02.877312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.648 qpair failed and we were unable to recover it. 00:33:54.648 [2024-10-13 17:44:02.877458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.648 [2024-10-13 17:44:02.877750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.648 [2024-10-13 17:44:02.877762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.648 qpair failed and we were unable to recover it. 
00:33:54.651 [2024-10-13 17:44:02.929451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.929749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.929760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 00:33:54.651 [2024-10-13 17:44:02.930069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.930298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.930308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 00:33:54.651 [2024-10-13 17:44:02.930496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.930790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.930801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 00:33:54.651 [2024-10-13 17:44:02.931106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.931419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.931430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 
00:33:54.651 [2024-10-13 17:44:02.931613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.931874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.931884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 00:33:54.651 [2024-10-13 17:44:02.932190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.932393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.932405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 00:33:54.651 [2024-10-13 17:44:02.932733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.933070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.933081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 00:33:54.651 [2024-10-13 17:44:02.933390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.933668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.933680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 
00:33:54.651 [2024-10-13 17:44:02.933868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.934148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.934159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 00:33:54.651 [2024-10-13 17:44:02.934470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.934782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.934792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 00:33:54.651 [2024-10-13 17:44:02.935075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.935389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.935400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 00:33:54.651 [2024-10-13 17:44:02.935711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.936023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.936034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 
00:33:54.651 [2024-10-13 17:44:02.936336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.936646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.651 [2024-10-13 17:44:02.936657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.651 qpair failed and we were unable to recover it. 00:33:54.651 [2024-10-13 17:44:02.936958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.937251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.937261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.937542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.937840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.937850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.938159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.938469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.938480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 
00:33:54.652 [2024-10-13 17:44:02.938782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.939096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.939107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.939412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.939596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.939607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.939929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.940121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.940133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.940442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.940770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.940781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 
00:33:54.652 [2024-10-13 17:44:02.941084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.941392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.941403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.941709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.941989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.942000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.942277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.942501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.942512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.942817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.943042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.943052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 
00:33:54.652 [2024-10-13 17:44:02.943362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.943678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.943688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.943991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.944302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.944313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.944595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.944876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.944887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.945192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.945522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.945532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 
00:33:54.652 [2024-10-13 17:44:02.945834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.946150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.946161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.946468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.946785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.946795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.947098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.947432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.947443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.947749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.948067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.948081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 
00:33:54.652 [2024-10-13 17:44:02.948391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.948729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.948741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.949048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.949378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.949389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.949676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.950002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.950012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.950314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.950630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.950640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 
00:33:54.652 [2024-10-13 17:44:02.950944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.951282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.951293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.951609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.951949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.951959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.952281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.952602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.952613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 00:33:54.652 [2024-10-13 17:44:02.952914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.953231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.652 [2024-10-13 17:44:02.953242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.652 qpair failed and we were unable to recover it. 
00:33:54.653 [2024-10-13 17:44:02.953547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.953858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.953869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.954153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.954509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.954519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.954812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.955135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.955146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.955447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.955738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.955749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 
00:33:54.653 [2024-10-13 17:44:02.956043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.956303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.956316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.956608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.956904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.956916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.957281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.957590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.957602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.957899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.958220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.958233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 
00:33:54.653 [2024-10-13 17:44:02.958542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.958827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.958839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.959126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.959434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.959444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.959731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.960042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.960053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.960263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.960580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.960590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 
00:33:54.653 [2024-10-13 17:44:02.960859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.961054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.961067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.961428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.961766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.961777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.962076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.962360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.962370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 00:33:54.653 [2024-10-13 17:44:02.962676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.962903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.653 [2024-10-13 17:44:02.962914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.653 qpair failed and we were unable to recover it. 
00:33:54.653 [2024-10-13 17:44:02.963232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.653 [2024-10-13 17:44:02.963533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.653 [2024-10-13 17:44:02.963544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.653 qpair failed and we were unable to recover it.
00:33:54.657 [2024-10-13 17:44:03.016984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.017213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.017224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.017385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.017700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.017710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.018029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.018306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.018318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.018628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.018939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.018950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 
00:33:54.657 [2024-10-13 17:44:03.019332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.019555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.019566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.019922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.020230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.020242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.020575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.020853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.020865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.021169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.021515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.021526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 
00:33:54.657 [2024-10-13 17:44:03.021823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.022130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.022141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.022455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.022768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.022778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.023111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.023430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.023441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.023810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.024106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.024117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 
00:33:54.657 [2024-10-13 17:44:03.024427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.024771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.024781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.025086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.025431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.025442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.025758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.026054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.026070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.026400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.026718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.026729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 
00:33:54.657 [2024-10-13 17:44:03.027032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.027223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.027235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.027542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.027836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.027847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.028139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.028449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.028460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.028735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.028943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.028953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 
00:33:54.657 [2024-10-13 17:44:03.029242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.029543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.029555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.029931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.030154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.030166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.030480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.030814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.030825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.031123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.031408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.031420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 
00:33:54.657 [2024-10-13 17:44:03.031730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.032010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.032022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.657 qpair failed and we were unable to recover it. 00:33:54.657 [2024-10-13 17:44:03.032334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.657 [2024-10-13 17:44:03.032671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.032683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.033015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.033328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.033339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.033636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.033949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.033960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 
00:33:54.658 [2024-10-13 17:44:03.034257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.034594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.034604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.034907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.035217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.035228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.035511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.035730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.035741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.036047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.036361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.036372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 
00:33:54.658 [2024-10-13 17:44:03.036673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.036932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.036943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.037255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.037589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.037600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.037925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.038231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.038244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.038550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.038864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.038876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 
00:33:54.658 [2024-10-13 17:44:03.039184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.039529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.039539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.039846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.040160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.040171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.040503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.040815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.040825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.041165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.041427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.041438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 
00:33:54.658 [2024-10-13 17:44:03.041741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.042058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.042074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.042400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.042710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.042723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.043053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.043402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.043413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.043720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.044030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.044041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 
00:33:54.658 [2024-10-13 17:44:03.044345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.044650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.044660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.044934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.045261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.045272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.045602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.045800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.045812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.658 [2024-10-13 17:44:03.046127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.046448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.046459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 
00:33:54.658 [2024-10-13 17:44:03.046762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.047047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.658 [2024-10-13 17:44:03.047058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.658 qpair failed and we were unable to recover it. 00:33:54.659 [2024-10-13 17:44:03.047339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.047663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.047674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.659 qpair failed and we were unable to recover it. 00:33:54.659 [2024-10-13 17:44:03.047971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.048310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.048322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.659 qpair failed and we were unable to recover it. 00:33:54.659 [2024-10-13 17:44:03.048626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.048935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.048947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.659 qpair failed and we were unable to recover it. 
00:33:54.659 [2024-10-13 17:44:03.049223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.049558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.049569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.659 qpair failed and we were unable to recover it. 00:33:54.659 [2024-10-13 17:44:03.049879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.050194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.050206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.659 qpair failed and we were unable to recover it. 00:33:54.659 [2024-10-13 17:44:03.050539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.050865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.050876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.659 qpair failed and we were unable to recover it. 00:33:54.659 [2024-10-13 17:44:03.051179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.051499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.659 [2024-10-13 17:44:03.051509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.659 qpair failed and we were unable to recover it. 
00:33:54.659 [2024-10-13 17:44:03.051851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.659 [2024-10-13 17:44:03.052195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.659 [2024-10-13 17:44:03.052206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.659 qpair failed and we were unable to recover it.
00:33:54.659 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" pattern repeats continuously from 17:44:03.052 through 17:44:03.104 for dozens of further connect attempts to tqpair=0x1b7e960 at 10.0.0.2:4420, all failing with errno = 111; verbatim repeats elided ...]
00:33:54.662 [2024-10-13 17:44:03.105115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.105404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.105415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.662 qpair failed and we were unable to recover it. 00:33:54.662 [2024-10-13 17:44:03.105723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.106073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.106086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.662 qpair failed and we were unable to recover it. 00:33:54.662 [2024-10-13 17:44:03.106420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.106739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.106750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.662 qpair failed and we were unable to recover it. 00:33:54.662 [2024-10-13 17:44:03.107076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.107383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.107394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.662 qpair failed and we were unable to recover it. 
00:33:54.662 [2024-10-13 17:44:03.107660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.107808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.107820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.662 qpair failed and we were unable to recover it. 00:33:54.662 [2024-10-13 17:44:03.108076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.108412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.108423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.662 qpair failed and we were unable to recover it. 00:33:54.662 [2024-10-13 17:44:03.108757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.109093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.109104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.662 qpair failed and we were unable to recover it. 00:33:54.662 [2024-10-13 17:44:03.109414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.109698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.109709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.662 qpair failed and we were unable to recover it. 
00:33:54.662 [2024-10-13 17:44:03.110015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.110330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.110340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.662 qpair failed and we were unable to recover it. 00:33:54.662 [2024-10-13 17:44:03.110487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.110773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.110786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.662 qpair failed and we were unable to recover it. 00:33:54.662 [2024-10-13 17:44:03.110978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.111247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-10-13 17:44:03.111258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.111580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.111895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.111905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 
00:33:54.663 [2024-10-13 17:44:03.112213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.112527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.112538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.112827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.113115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.113126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.113298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.113599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.113610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.113919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.114110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.114120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 
00:33:54.663 [2024-10-13 17:44:03.114448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.114782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.114792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.115096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.115325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.115335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.115699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.115992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.116003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.116298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.116472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.116483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 
00:33:54.663 [2024-10-13 17:44:03.116742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.117079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.117091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.117398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.117725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.117735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.118030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.118402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.118413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.118716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.119052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.119071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 
00:33:54.663 [2024-10-13 17:44:03.119348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.119663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.119673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.119992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.120342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.120353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.120630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.120947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.120957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.121282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.121594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.121604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 
00:33:54.663 [2024-10-13 17:44:03.121907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.122226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.122238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.122547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.122860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.122871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.123155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.123346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.123356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.123653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.123945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.123956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 
00:33:54.663 [2024-10-13 17:44:03.124277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.124589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.124600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.124906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.125220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.125231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.125540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.125834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.125844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 00:33:54.663 [2024-10-13 17:44:03.126127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.126413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-10-13 17:44:03.126424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.663 qpair failed and we were unable to recover it. 
00:33:54.664 [2024-10-13 17:44:03.126725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.127036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.127047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.127359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.127653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.127663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.127840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.128111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.128121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.128421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.128738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.128748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 
00:33:54.664 [2024-10-13 17:44:03.129056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.129251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.129263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.129535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.129824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.129834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.130164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.130361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.130371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.130699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.131010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.131020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 
00:33:54.664 [2024-10-13 17:44:03.131218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.131538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.131548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.131851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.132189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.132200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.132532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.132851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.132862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.133164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.133476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.133486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 
00:33:54.664 [2024-10-13 17:44:03.133788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.134102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.134112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.134422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.134722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.134732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.135008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.135317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.135328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.135629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.135946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.135956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 
00:33:54.664 [2024-10-13 17:44:03.136233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.136560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.136570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.136875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.137175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.137185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.137414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.137692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.137703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.138004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.138290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.138302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 
00:33:54.664 [2024-10-13 17:44:03.138614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.138927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.138938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.139235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.139558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.139569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.139752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.140059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.140075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.140393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.140706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.140716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 
00:33:54.664 [2024-10-13 17:44:03.141067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.141379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.141392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.141698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.142054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.142069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.664 qpair failed and we were unable to recover it. 00:33:54.664 [2024-10-13 17:44:03.142398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.142738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-10-13 17:44:03.142748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.142936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.143253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.143264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 
00:33:54.665 [2024-10-13 17:44:03.143567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.143845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.143856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.144158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.144329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.144340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.144658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.144971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.144981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.145184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.145483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.145493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 
00:33:54.665 [2024-10-13 17:44:03.145797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.146091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.146102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.146420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.146734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.146744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.147076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.147390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.147403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.147614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.147951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.147962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 
00:33:54.665 [2024-10-13 17:44:03.148230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.148522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.148532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.148834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.149146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.149158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.149441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.149758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.149769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.150070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.150360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.150370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 
00:33:54.665 [2024-10-13 17:44:03.150672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.150982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.150992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.151296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.151496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.151506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.151798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.152114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.152125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.152328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.152661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.152671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 
00:33:54.665 [2024-10-13 17:44:03.152939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.153261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.153271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.153578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.153913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.153923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.154249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.154561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.154571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.154805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.154984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.154995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 
00:33:54.665 [2024-10-13 17:44:03.155367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.155680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.155691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.155959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.156280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.156291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.156608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.156928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.156939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.157263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.157468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.157479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 
00:33:54.665 [2024-10-13 17:44:03.157777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.158094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.158104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.665 qpair failed and we were unable to recover it. 00:33:54.665 [2024-10-13 17:44:03.158431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.665 [2024-10-13 17:44:03.158746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.158756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-10-13 17:44:03.159036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.159388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.159399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-10-13 17:44:03.159699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.160010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.160021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 
00:33:54.666 [2024-10-13 17:44:03.160325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.160639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.160650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-10-13 17:44:03.160952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.161268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.161278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-10-13 17:44:03.161584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.161918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.161929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-10-13 17:44:03.162237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.162452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-10-13 17:44:03.162462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 
00:33:54.666 [2024-10-13 17:44:03.162765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.163016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.163028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.163332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.163635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.163646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.163816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.164020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.164031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.164339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.164650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.164660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 
00:33:54.936 [2024-10-13 17:44:03.164969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.165267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.165278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.165579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.165896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.165907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.166216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.166554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.166565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.166875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.167171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.167184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 
00:33:54.936 [2024-10-13 17:44:03.167470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.167805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.167817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.168122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.168396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.168408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.168716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.169058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.169080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.169363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.169673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.169683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 
00:33:54.936 [2024-10-13 17:44:03.169990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.170291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.170301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.170608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.170923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.170933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.171256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.171583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.171594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 00:33:54.936 [2024-10-13 17:44:03.171892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.172203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.172214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.936 qpair failed and we were unable to recover it. 
00:33:54.936 [2024-10-13 17:44:03.172515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.172828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.936 [2024-10-13 17:44:03.172839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.173149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.173300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.173312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.173599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.173893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.173904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.174182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.174493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.174504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 
00:33:54.937 [2024-10-13 17:44:03.174775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.175095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.175106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.175432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.175751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.175762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.176041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.176354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.176365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.176664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.176976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.176987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 
00:33:54.937 [2024-10-13 17:44:03.177321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.177610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.177621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.177916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.178236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.178249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.178552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.178865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.178875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.179264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.179555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.179568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 
00:33:54.937 [2024-10-13 17:44:03.179739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.180080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.180091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.180394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.180710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.180720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.181046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.181361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.181372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.181680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.181981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.181991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 
00:33:54.937 [2024-10-13 17:44:03.182313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.182655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.182666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.182935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.183181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.183193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.183518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.183827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.183838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.184149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.184441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.184451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 
00:33:54.937 [2024-10-13 17:44:03.184798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.185093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.185104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.185407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.185585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.185597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.185873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.186185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.186196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.186499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.186815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.186825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 
00:33:54.937 [2024-10-13 17:44:03.187126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.187426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.187436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.187736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.188049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.188060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.937 [2024-10-13 17:44:03.188391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.188703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.937 [2024-10-13 17:44:03.188714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.937 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.189026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.189325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.189335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 
00:33:54.938 [2024-10-13 17:44:03.189639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.189954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.189965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.190242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.190570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.190582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.190917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.191198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.191210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.191525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.191861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.191871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 
00:33:54.938 [2024-10-13 17:44:03.192175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.192492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.192502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.192804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.193117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.193128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.193412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.193572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.193584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.193943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.194276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.194287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 
00:33:54.938 [2024-10-13 17:44:03.194586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.194938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.194948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.195246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.195566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.195577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.195857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.196171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.196182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.196490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.196779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.196789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 
00:33:54.938 [2024-10-13 17:44:03.197125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.197425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.197435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.197740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.198084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.198095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.198394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.198723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.198734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.199079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.199372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.199382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 
00:33:54.938 [2024-10-13 17:44:03.199686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.199996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.200006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.200310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.200625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.200635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.200918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.201122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.201133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.201394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.201682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.201693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 
00:33:54.938 [2024-10-13 17:44:03.201996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.202304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.202315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.202615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.202898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.202908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.203233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.203402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.203414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.203741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.204075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.204086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 
00:33:54.938 [2024-10-13 17:44:03.204394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.204711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.938 [2024-10-13 17:44:03.204722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.938 qpair failed and we were unable to recover it. 00:33:54.938 [2024-10-13 17:44:03.204914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.205214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.205225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.205501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.205810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.205822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.206164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.206477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.206488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 
00:33:54.939 [2024-10-13 17:44:03.206794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.207113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.207125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.207435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.207735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.207746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.208021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.208332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.208343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.208644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.208957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.208967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 
00:33:54.939 [2024-10-13 17:44:03.209274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.209594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.209606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.209910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.210207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.210218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.210494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.210681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.210692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.211001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.211318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.211329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 
00:33:54.939 [2024-10-13 17:44:03.211633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.211945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.211955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.212235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.212452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.212463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.212806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.213120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.213130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.213423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.213734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.213746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 
00:33:54.939 [2024-10-13 17:44:03.214035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.214325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.214336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.214639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.214952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.214964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.215232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.215549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.215560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.215767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.216057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.216072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 
00:33:54.939 [2024-10-13 17:44:03.216368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.216679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.216690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.216988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.217290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.217301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.217625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.217936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.217947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.218255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.218567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.218577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 
00:33:54.939 [2024-10-13 17:44:03.218897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.219200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.219212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.219513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.219669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.219681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.219967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.220274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.220284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 00:33:54.939 [2024-10-13 17:44:03.220589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.220933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.939 [2024-10-13 17:44:03.220944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.939 qpair failed and we were unable to recover it. 
00:33:54.940 [2024-10-13 17:44:03.221224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.221558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.221569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.940 qpair failed and we were unable to recover it. 00:33:54.940 [2024-10-13 17:44:03.221873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.222188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.222199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.940 qpair failed and we were unable to recover it. 00:33:54.940 [2024-10-13 17:44:03.222479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.222798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.222809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.940 qpair failed and we were unable to recover it. 00:33:54.940 [2024-10-13 17:44:03.223120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.223437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.223447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.940 qpair failed and we were unable to recover it. 
00:33:54.940 [2024-10-13 17:44:03.223748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.224069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.224080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.940 qpair failed and we were unable to recover it. 00:33:54.940 [2024-10-13 17:44:03.224386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.224697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.224707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.940 qpair failed and we were unable to recover it. 00:33:54.940 [2024-10-13 17:44:03.224988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.225300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.225311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.940 qpair failed and we were unable to recover it. 00:33:54.940 [2024-10-13 17:44:03.225618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.225934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.940 [2024-10-13 17:44:03.225944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.940 qpair failed and we were unable to recover it. 
00:33:54.940 [2024-10-13 17:44:03.226282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.940 [2024-10-13 17:44:03.226593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.940 [2024-10-13 17:44:03.226603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.940 qpair failed and we were unable to recover it.
[... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock record for tqpair=0x1b7e960 (addr=10.0.0.2, port=4420) repeats for every reconnect attempt from 17:44:03.226904 through 17:44:03.257589; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:54.942 [2024-10-13 17:44:03.257892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.942 [2024-10-13 17:44:03.258219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.942 [2024-10-13 17:44:03.258230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.942 qpair failed and we were unable to recover it.
00:33:54.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3421050 Killed "${NVMF_APP[@]}" "$@"
00:33:54.942 17:44:03 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:33:54.942 17:44:03 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:54.942 17:44:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:33:54.942 17:44:03 -- common/autotest_common.sh@712 -- # xtrace_disable
00:33:54.942 17:44:03 -- common/autotest_common.sh@10 -- # set +x
00:33:54.943 17:44:03 -- nvmf/common.sh@469 -- # nvmfpid=3422063
00:33:54.943 17:44:03 -- nvmf/common.sh@470 -- # waitforlisten 3422063
00:33:54.943 17:44:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:54.943 17:44:03 -- common/autotest_common.sh@819 -- # '[' -z 3422063 ']'
00:33:54.943 17:44:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:54.943 17:44:03 -- common/autotest_common.sh@824 -- # local max_retries=100
00:33:54.943 17:44:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:54.943 17:44:03 -- common/autotest_common.sh@828 -- # xtrace_disable
00:33:54.943 17:44:03 -- common/autotest_common.sh@10 -- # set +x
[... interleaved posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock records for tqpair=0x1b7e960 (addr=10.0.0.2, port=4420) continue through 17:44:03.276935; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:54.943 [2024-10-13 17:44:03.277281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.277588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.277600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.943 qpair failed and we were unable to recover it. 00:33:54.943 [2024-10-13 17:44:03.277819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.277996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.278009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.943 qpair failed and we were unable to recover it. 00:33:54.943 [2024-10-13 17:44:03.278314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.278615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.278628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.943 qpair failed and we were unable to recover it. 00:33:54.943 [2024-10-13 17:44:03.278830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.279196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.279208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.943 qpair failed and we were unable to recover it. 
00:33:54.943 [2024-10-13 17:44:03.279495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.279840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.279852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.943 qpair failed and we were unable to recover it. 00:33:54.943 [2024-10-13 17:44:03.280180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.280486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.280498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.943 qpair failed and we were unable to recover it. 00:33:54.943 [2024-10-13 17:44:03.280891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.281198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.281211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.943 qpair failed and we were unable to recover it. 00:33:54.943 [2024-10-13 17:44:03.281530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.281848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.943 [2024-10-13 17:44:03.281862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.943 qpair failed and we were unable to recover it. 
00:33:54.944 [2024-10-13 17:44:03.282176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.282488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.282500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.282842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.283167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.283178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.283387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.283699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.283710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.284018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.284354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.284366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 
00:33:54.944 [2024-10-13 17:44:03.284688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.285019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.285030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.285206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.285418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.285430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.285610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.285930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.285942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.286232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.286561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.286573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 
00:33:54.944 [2024-10-13 17:44:03.286906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.287186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.287198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.287382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.287584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.287595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.287922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.288194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.288206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.288367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.288683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.288695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 
00:33:54.944 [2024-10-13 17:44:03.289037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.289369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.289380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.289573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.289915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.289926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.290229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.290574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.290584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.290902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.291232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.291244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 
00:33:54.944 [2024-10-13 17:44:03.291551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.291750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.291762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.292084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.292438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.292449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.292741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.292940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.292952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.293247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.293577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.293588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 
00:33:54.944 [2024-10-13 17:44:03.293781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.294076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.294087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.294453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.294786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.294798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.295134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.295302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.295314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.295646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.295908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.295919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 
00:33:54.944 [2024-10-13 17:44:03.296221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.296524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.296536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.296812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.297136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.297147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.944 [2024-10-13 17:44:03.297361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.297670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.944 [2024-10-13 17:44:03.297681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.944 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.297803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.297957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.297967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 
00:33:54.945 [2024-10-13 17:44:03.298257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.298586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.298599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.298908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.299239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.299250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.299555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.299760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.299772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.300084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.300408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.300419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 
00:33:54.945 [2024-10-13 17:44:03.300759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.301127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.301138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.301308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.301628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.301640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.301926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.302258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.302270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.302637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.302926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.302939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 
00:33:54.945 [2024-10-13 17:44:03.303264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.303603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.303614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.303927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.304326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.304338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.304643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.304968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.304978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.305334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.305640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.305651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 
00:33:54.945 [2024-10-13 17:44:03.305970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.306295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.306307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.306592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.306942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.306953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.307280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.307604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.307615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.307934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.308032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.308043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 
00:33:54.945 [2024-10-13 17:44:03.308373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.308708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.308719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.309041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.309367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.309379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.309667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.309872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.309884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.310212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.310584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.310596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 
00:33:54.945 [2024-10-13 17:44:03.310912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.311120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.311132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.311379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.311734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.311746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.312084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.312291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.312305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.312634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.312811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.312822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 
00:33:54.945 [2024-10-13 17:44:03.313122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.313429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.313441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.945 qpair failed and we were unable to recover it. 00:33:54.945 [2024-10-13 17:44:03.313612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.313891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.945 [2024-10-13 17:44:03.313902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.314194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.314515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.314525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.314704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.314990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.315001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 
00:33:54.946 [2024-10-13 17:44:03.315319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.315643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.315654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.315977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.316308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.316320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.316630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.316954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.316966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.317280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.317533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.317544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 
00:33:54.946 [2024-10-13 17:44:03.317861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.318175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.318186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.318394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.318735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.318745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.318906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.319107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.319119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.319308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.319591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.319603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 
00:33:54.946 [2024-10-13 17:44:03.319918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.320221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.320233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.320540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.320864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.320875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.321167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.321494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.321506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.321825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.322180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.322190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 
00:33:54.946 [2024-10-13 17:44:03.322506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.322854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.322865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.323195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.323198] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:54.946 [2024-10-13 17:44:03.323263] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.946 [2024-10-13 17:44:03.323516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.323527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.323719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.324041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.324052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 
00:33:54.946 [2024-10-13 17:44:03.324412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.324709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.324721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.325111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.325393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.325406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.325789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.325986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.325998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.326302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.326619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.326630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 
00:33:54.946 [2024-10-13 17:44:03.326951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.327270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.327281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.946 qpair failed and we were unable to recover it. 00:33:54.946 [2024-10-13 17:44:03.327602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.946 [2024-10-13 17:44:03.327934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.327946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.328230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.328556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.328568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.328801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.329146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.329158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 
00:33:54.947 [2024-10-13 17:44:03.329492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.329815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.329827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.330044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.330428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.330440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.330614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.330959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.330971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.331281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.331568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.331580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 
00:33:54.947 [2024-10-13 17:44:03.331880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.332193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.332205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.332503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.332851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.332863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.333057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.333362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.333374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.333660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.333986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.333997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 
00:33:54.947 [2024-10-13 17:44:03.334347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.334544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.334558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.334759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.334953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.334965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.335308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.335506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.335519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.335851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.336193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.336205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 
00:33:54.947 [2024-10-13 17:44:03.336512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.336835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.336847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.337136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.337321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.337333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.337662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.337979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.337990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.338180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.338452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.338464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 
00:33:54.947 [2024-10-13 17:44:03.338772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.339089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.339101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.339279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.339610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.339621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.339955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.340284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.340296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.340592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.340911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.340923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 
00:33:54.947 [2024-10-13 17:44:03.341230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.341539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.341550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.341735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.342059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.342084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.342425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.342744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.342755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 00:33:54.947 [2024-10-13 17:44:03.343045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.343368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.343379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.947 qpair failed and we were unable to recover it. 
00:33:54.947 [2024-10-13 17:44:03.343563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.947 [2024-10-13 17:44:03.343881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.343891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.344199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.344516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.344526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.344825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.345033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.345044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.345383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.345707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.345719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 
00:33:54.948 [2024-10-13 17:44:03.345900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.346246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.346257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.346558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.346876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.346887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.347195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.347509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.347519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.347803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.347998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.348009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 
00:33:54.948 [2024-10-13 17:44:03.348341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.348669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.348679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.348958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.349162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.349173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.349501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.349831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.349841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.350184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.350372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.350382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 
00:33:54.948 [2024-10-13 17:44:03.350697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.351022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.351032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.351336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.351652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.351663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.351980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.352300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.352311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.352503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.352804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.352815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 
00:33:54.948 [2024-10-13 17:44:03.353125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.353474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.353484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.353674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.354070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.354081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.354408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.354595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.354605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.354981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.355241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.355252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 
00:33:54.948 [2024-10-13 17:44:03.355556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.355897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.355908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.356080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.356416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.356426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.356738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.357056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.357072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 00:33:54.948 [2024-10-13 17:44:03.357264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.357577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.948 [2024-10-13 17:44:03.357588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.948 qpair failed and we were unable to recover it. 
00:33:54.948 [2024-10-13 17:44:03.357906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.948 [2024-10-13 17:44:03.358252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.948 [2024-10-13 17:44:03.358264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.948 qpair failed and we were unable to recover it.
00:33:54.948 [2024-10-13 17:44:03.358574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.948 EAL: No free 2048 kB hugepages reported on node 1
00:33:54.948 [2024-10-13 17:44:03.358891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.948 [2024-10-13 17:44:03.358901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.948 qpair failed and we were unable to recover it.
00:33:54.948 [2024-10-13 17:44:03.359212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.948 [2024-10-13 17:44:03.359532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.948 [2024-10-13 17:44:03.359543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.948 qpair failed and we were unable to recover it.
00:33:54.948 [2024-10-13 17:44:03.359874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.360192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.360204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.360437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.360742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.360753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.361067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.361347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.361359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.361693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.362024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.362036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.362342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.362662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.362673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.362905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.363183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.363194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.363342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.363630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.363641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.363944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.364242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.364253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.364586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.364907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.364917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.365237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.365573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.365584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.365897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.366230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.366241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.366432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.366762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.366772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.367104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.367448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.367459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.367766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.368110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.368121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.368295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.368641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.368651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.368957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.369260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.369271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.369605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.369934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.369945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.370247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.370463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.370474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.370721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.370906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.370916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.371229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.371556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.371567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.371901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.372059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.372086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.372395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.372710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.372721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.373018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.373188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.373199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.373481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.373815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.373826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.373993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.374364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.374375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.374684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.374910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.374921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.949 [2024-10-13 17:44:03.375309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.375608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.949 [2024-10-13 17:44:03.375618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.949 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.375933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.376233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.376244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.376558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.376857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.376867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.377184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.377516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.377526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.377885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.378226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.378237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.378550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.378810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.378824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.379102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.379258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.379268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.379440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.379709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.379720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.379914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.380217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.380228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.380405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.380662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.380672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.381012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.381347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.381358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.381668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.381902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.381912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.382239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.382427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.382439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.382755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.383075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.383086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.383403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.383715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.383725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.384031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.384378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.384389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.384694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.384995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.385005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.385374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.385714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.385726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.386055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.386373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.386384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.386694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.386853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.386865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.387172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.387516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.387527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.387833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.388172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.388183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.388514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.388856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.388867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.389086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.389380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.389391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.389697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.390017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.390027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.390345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.390670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.390681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.390972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.391318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.391328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.950 [2024-10-13 17:44:03.391639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.391987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.950 [2024-10-13 17:44:03.391998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.950 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.392310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.392630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.392642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.392947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.393183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.393194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.393390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.393729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.393739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.393924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.394249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.394260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.394546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.394855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.394866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.395034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.395271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.395282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.395451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.395641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.395651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.395952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.396286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.396297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.396571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.396910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.396921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.397109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.397407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.397418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.397695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.398036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.398047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.398351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.398550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.398561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.398940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.399231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.399243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.399547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.399737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.399748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.400088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.400424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.951 [2024-10-13 17:44:03.400434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:54.951 qpair failed and we were unable to recover it.
00:33:54.951 [2024-10-13 17:44:03.400740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.401056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.401072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.951 qpair failed and we were unable to recover it. 00:33:54.951 [2024-10-13 17:44:03.401420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.401714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.401725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.951 qpair failed and we were unable to recover it. 00:33:54.951 [2024-10-13 17:44:03.402038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.402380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.402390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.951 qpair failed and we were unable to recover it. 00:33:54.951 [2024-10-13 17:44:03.402611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.402912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.402923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.951 qpair failed and we were unable to recover it. 
00:33:54.951 [2024-10-13 17:44:03.403178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.403379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.403389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.951 qpair failed and we were unable to recover it. 00:33:54.951 [2024-10-13 17:44:03.403750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.404071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.404082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.951 qpair failed and we were unable to recover it. 00:33:54.951 [2024-10-13 17:44:03.404401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.404575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.404585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.951 qpair failed and we were unable to recover it. 00:33:54.951 [2024-10-13 17:44:03.404881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.405199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.405210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.951 qpair failed and we were unable to recover it. 
00:33:54.951 [2024-10-13 17:44:03.405510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.405725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.951 [2024-10-13 17:44:03.405737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.406044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.406371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.406382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.406677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.407002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.407013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.407203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.407541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.407552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 
00:33:54.952 [2024-10-13 17:44:03.407864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.408057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.408073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.408398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.408725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.408738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.409041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.409225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.409236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.409533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.409848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.409858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 
00:33:54.952 [2024-10-13 17:44:03.410165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.410499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.410509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.410824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.411130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.411141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.411453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.411746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.411756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.412041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.412346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.412357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 
00:33:54.952 [2024-10-13 17:44:03.412696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.413017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.413028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.413353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.413584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:54.952 [2024-10-13 17:44:03.413695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.413705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.414022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.414366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.414377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.414716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.415071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.415086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 
00:33:54.952 [2024-10-13 17:44:03.415431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.415756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.415767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.416072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.416385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.416396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.416704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.416866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.416876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.417191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.417537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.417548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 
00:33:54.952 [2024-10-13 17:44:03.417864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.418186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.418198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.418518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.418722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.418733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.419049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.419402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.419413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.419616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.419891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.419902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 
00:33:54.952 [2024-10-13 17:44:03.420236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.420518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.420529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.420846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.421175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.421186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.421530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.421856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.421867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.952 qpair failed and we were unable to recover it. 00:33:54.952 [2024-10-13 17:44:03.422109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.952 [2024-10-13 17:44:03.422315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.422325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 
00:33:54.953 [2024-10-13 17:44:03.422612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.422927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.422938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.423252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.423539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.423550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.423865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.424043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.424055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.424378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.424549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.424561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 
00:33:54.953 [2024-10-13 17:44:03.424856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.425039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.425050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.425402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.425699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.425710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.426019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.426333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.426345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.426628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.426975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.426988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 
00:33:54.953 [2024-10-13 17:44:03.427310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.427492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.427504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.427811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.428171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.428184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.428491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.428805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.428817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.429001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.429225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.429237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 
00:33:54.953 [2024-10-13 17:44:03.429548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.429864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.429874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.430193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.430532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.430543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.430723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.431055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.431072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.431316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.431622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.431632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 
00:33:54.953 [2024-10-13 17:44:03.431934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.432232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.432244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.432552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.432827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.432839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.433128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.433463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.433474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.433751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.434071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.434083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 
00:33:54.953 [2024-10-13 17:44:03.434402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.434716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.434726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.435030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.435341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.435352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.435544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.435856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.435867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.436171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.436503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.436514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 
00:33:54.953 [2024-10-13 17:44:03.436821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.437127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.437138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.437455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.437807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.953 [2024-10-13 17:44:03.437817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.953 qpair failed and we were unable to recover it. 00:33:54.953 [2024-10-13 17:44:03.438129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.438441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.438452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.438608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.438904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.438915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 
00:33:54.954 [2024-10-13 17:44:03.439228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.439541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.439551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.439860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.440174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.440186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.440493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.440833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.440844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.441167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.441498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.441508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 
00:33:54.954 [2024-10-13 17:44:03.441709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.441985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.441995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.442298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.442624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.442635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.442938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.443069] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:54.954 [2024-10-13 17:44:03.443194] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.954 [2024-10-13 17:44:03.443204] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.954 [2024-10-13 17:44:03.443213] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.954 [2024-10-13 17:44:03.443231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.443242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 
00:33:54.954 [2024-10-13 17:44:03.443406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:54.954 [2024-10-13 17:44:03.443583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.443587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:54.954 [2024-10-13 17:44:03.443745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:54.954 [2024-10-13 17:44:03.443747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:54.954 [2024-10-13 17:44:03.443902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.443912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.444103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.444321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.444332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.444652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.444864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.444876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 
00:33:54.954 [2024-10-13 17:44:03.445199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.445398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.445409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.445608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.445793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.445804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.446007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.446194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.446204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.446383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.446660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.446671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 
00:33:54.954 [2024-10-13 17:44:03.446866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.447178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.447189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.447494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.447845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.447855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.448082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.448426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.448437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.448619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.448952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.448963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 
00:33:54.954 [2024-10-13 17:44:03.449145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.449338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.449352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.449547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.449833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.449843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.450047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.450260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.954 [2024-10-13 17:44:03.450271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:54.954 qpair failed and we were unable to recover it. 00:33:54.954 [2024-10-13 17:44:03.450581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.450878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.450890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 
00:33:55.227 [2024-10-13 17:44:03.451193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.451490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.451501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.451676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.452000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.452011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.452343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.452662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.452672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.452866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.453074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.453087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 
00:33:55.227 [2024-10-13 17:44:03.453398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.453715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.453725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.454070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.454385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.454396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.454567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.454842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.454859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.455258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.455326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.455336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 
00:33:55.227 [2024-10-13 17:44:03.455641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.455960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.455971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.456161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.456429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.456440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.456607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.456887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.456898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.457174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.457489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.457499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 
00:33:55.227 [2024-10-13 17:44:03.457809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.458149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.458160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.458208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.458512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.458522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.458726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.458910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.458921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.459101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.459447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.459458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 
00:33:55.227 [2024-10-13 17:44:03.459609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.459800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.459810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.460128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.460416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.460427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.460607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.460924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.460936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.227 qpair failed and we were unable to recover it. 00:33:55.227 [2024-10-13 17:44:03.461143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.461475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.227 [2024-10-13 17:44:03.461485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 
00:33:55.228 [2024-10-13 17:44:03.461805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.462156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.462168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.462341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.462614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.462625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.462932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.463102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.463112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.463284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.463475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.463486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 
00:33:55.228 [2024-10-13 17:44:03.463667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.463860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.463871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.464204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.464388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.464399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.464719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.465068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.465080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.465398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.465722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.465733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 
00:33:55.228 [2024-10-13 17:44:03.465975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.466275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.466286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.466577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.466902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.466914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.467226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.467572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.467583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.467885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.468153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.468165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 
00:33:55.228 [2024-10-13 17:44:03.468481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.468704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.468715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.469017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.469175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.469187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.469415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.469758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.469769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.469977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.470241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.470253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 
00:33:55.228 [2024-10-13 17:44:03.470612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.470907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.470918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.471225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.471559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.471570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.471886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.472210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.472221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.472528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.472820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.472831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 
00:33:55.228 [2024-10-13 17:44:03.473139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.473450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.473460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.473758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.474069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.474080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.474406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.474600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.474610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.474935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.475134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.475145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 
00:33:55.228 [2024-10-13 17:44:03.475246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.475451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.475461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.475638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.475686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.475696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.228 qpair failed and we were unable to recover it. 00:33:55.228 [2024-10-13 17:44:03.475872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.228 [2024-10-13 17:44:03.476052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.476077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.476338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.476524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.476535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 
00:33:55.229 [2024-10-13 17:44:03.476698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.476940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.476951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.477261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.477433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.477444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.477715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.477878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.477889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.478074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.478294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.478304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 
00:33:55.229 [2024-10-13 17:44:03.478494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.478707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.478717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.479029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.479394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.479405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.479728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.480048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.480060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.480289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.480635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.480646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 
00:33:55.229 [2024-10-13 17:44:03.480957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.481119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.481130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.481565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.481860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.481874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.482173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.482515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.482526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.482835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.483152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.483164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 
00:33:55.229 [2024-10-13 17:44:03.483348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.483647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.483658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.483987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.484215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.484226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.484390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.484553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.484563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.484871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.485156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.485168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 
00:33:55.229 [2024-10-13 17:44:03.485494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.485791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.485801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.486137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.486485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.486496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.486808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.487124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.487145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.487473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.487634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.487645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 
00:33:55.229 [2024-10-13 17:44:03.487908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.488239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.488250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.488557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.488884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.488895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.489061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.489377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.489388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.489698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.490084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.490095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 
00:33:55.229 [2024-10-13 17:44:03.490261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.490571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.229 [2024-10-13 17:44:03.490581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.229 qpair failed and we were unable to recover it. 00:33:55.229 [2024-10-13 17:44:03.490915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.491254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.491265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.491580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.491788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.491798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.492127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.492289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.492300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 
00:33:55.230 [2024-10-13 17:44:03.492587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.492940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.492950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.493257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.493583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.493593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.493910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.494250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.494261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.494575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.494799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.494809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 
00:33:55.230 [2024-10-13 17:44:03.495118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.495429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.495440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.495720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.496008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.496018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.496340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.496497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.496508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.496668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.496899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.496910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 
00:33:55.230 [2024-10-13 17:44:03.497105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.497426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.497438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.497746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.498056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.498075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.498413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.498727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.498739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.499050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.499379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.499390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 
00:33:55.230 [2024-10-13 17:44:03.499688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.499874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.499887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.500051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.500347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.500357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.500524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.500690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.500701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.500892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.501219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.501230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 
00:33:55.230 [2024-10-13 17:44:03.501423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.501723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.501733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.502010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.502067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.502077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.502360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.502546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.502556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.502868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.503151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.503163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 
00:33:55.230 [2024-10-13 17:44:03.503333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.503557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.503567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.503846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.504169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.504180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.504456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.504753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.504763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 00:33:55.230 [2024-10-13 17:44:03.505074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.505288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.230 [2024-10-13 17:44:03.505298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.230 qpair failed and we were unable to recover it. 
00:33:55.230 [2024-10-13 17:44:03.505612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.505930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.505942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.506131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.506455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.506466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.506644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.506932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.506942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.507114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.507484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.507494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 
00:33:55.231 [2024-10-13 17:44:03.507803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.508121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.508132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.508455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.508743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.508754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.509068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.509367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.509377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.509651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.509975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.509985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 
00:33:55.231 [2024-10-13 17:44:03.510287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.510471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.510484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.510665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.510834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.510846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.511036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.511122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.511131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.511333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.511502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.511513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 
00:33:55.231 [2024-10-13 17:44:03.511668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.511974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.511984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.512288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.512604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.512615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.512990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.513183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.513194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.513515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.513835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.513846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 
00:33:55.231 [2024-10-13 17:44:03.514158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.514354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.514364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.514686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.514856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.514866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.515169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.515496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.515507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 00:33:55.231 [2024-10-13 17:44:03.515554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.515865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.231 [2024-10-13 17:44:03.515876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.231 qpair failed and we were unable to recover it. 
00:33:55.231 [2024-10-13 17:44:03.516036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.231 [2024-10-13 17:44:03.516388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.231 [2024-10-13 17:44:03.516399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.231 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x1b7e960, addr=10.0.0.2, port=4420, through 2024-10-13 17:44:03.563 (wallclock 00:33:55.235) ...]
00:33:55.235 [2024-10-13 17:44:03.563257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.563437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.563451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.563747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.563894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.563904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.564094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.564260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.564273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.564597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.564924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.564935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 
00:33:55.235 [2024-10-13 17:44:03.565243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.565547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.565558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.565870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.566160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.566172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.566335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.566632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.566643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.566693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.566964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.566975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 
00:33:55.235 [2024-10-13 17:44:03.567171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.567489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.567500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.567696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.568006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.568018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.568334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.568651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.568662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.568959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.569001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.569011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 
00:33:55.235 [2024-10-13 17:44:03.569182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.569522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.569535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.569867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.570131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.570142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.570426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.570732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.570743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.570995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.571302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.571313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 
00:33:55.235 [2024-10-13 17:44:03.571621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.571922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.571933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.572228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.572558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.572568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.572881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.573070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.573082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.573399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.573720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.573730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 
00:33:55.235 [2024-10-13 17:44:03.574044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.574370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.574380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.574712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.574919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.574930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.235 qpair failed and we were unable to recover it. 00:33:55.235 [2024-10-13 17:44:03.575230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.235 [2024-10-13 17:44:03.575563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.575573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.575886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.576202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.576213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 
00:33:55.236 [2024-10-13 17:44:03.576402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.576576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.576587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.576778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.576941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.576951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.577133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.577469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.577480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.577631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.577790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.577800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 
00:33:55.236 [2024-10-13 17:44:03.577846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.578191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.578203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.578499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.578815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.578825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.579133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.579315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.579326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.579676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.579900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.579911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 
00:33:55.236 [2024-10-13 17:44:03.580221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.580536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.580547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.580682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.580848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.580858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.580902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.581069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.581081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.581463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.581631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.581641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 
00:33:55.236 [2024-10-13 17:44:03.581914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.582217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.582228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.582509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.582771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.582782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.582995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.583276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.583287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.583636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.583930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.583942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 
00:33:55.236 [2024-10-13 17:44:03.584281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.584602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.584612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.584896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.585072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.585082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.585386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.585608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.585619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.585931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.586092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.586103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 
00:33:55.236 [2024-10-13 17:44:03.586386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.586722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.586732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.586924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.587229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.587240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.587549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.587900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.587911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 00:33:55.236 [2024-10-13 17:44:03.588083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.588266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.236 [2024-10-13 17:44:03.588277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.236 qpair failed and we were unable to recover it. 
00:33:55.236 [2024-10-13 17:44:03.588590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.588930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.588940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 00:33:55.237 [2024-10-13 17:44:03.589267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.589586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.589597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 00:33:55.237 [2024-10-13 17:44:03.589910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.590087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.590098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 00:33:55.237 [2024-10-13 17:44:03.590422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.590742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.590753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 
00:33:55.237 [2024-10-13 17:44:03.591080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.591409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.591419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 00:33:55.237 [2024-10-13 17:44:03.591601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.591906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.591916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 00:33:55.237 [2024-10-13 17:44:03.592264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.592565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.592576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 00:33:55.237 [2024-10-13 17:44:03.592762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.592939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.592949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 
00:33:55.237 [2024-10-13 17:44:03.593220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.593540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.593550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 00:33:55.237 [2024-10-13 17:44:03.593904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.594233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.594244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 00:33:55.237 [2024-10-13 17:44:03.594550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.594894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.594905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 00:33:55.237 [2024-10-13 17:44:03.595216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.595384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.237 [2024-10-13 17:44:03.595395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.237 qpair failed and we were unable to recover it. 
00:33:55.237 [... the same failure pattern (posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 17:44:03.595439 through 17:44:03.640002 ...]
00:33:55.240 [2024-10-13 17:44:03.640302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.640631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.640642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.240 qpair failed and we were unable to recover it. 00:33:55.240 [2024-10-13 17:44:03.640956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.641260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.641271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.240 qpair failed and we were unable to recover it. 00:33:55.240 [2024-10-13 17:44:03.641472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.641768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.641779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.240 qpair failed and we were unable to recover it. 00:33:55.240 [2024-10-13 17:44:03.642090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.642233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.642243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.240 qpair failed and we were unable to recover it. 
00:33:55.240 [2024-10-13 17:44:03.642495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.642764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.642775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.240 qpair failed and we were unable to recover it. 00:33:55.240 [2024-10-13 17:44:03.642965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.643303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.643314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.240 qpair failed and we were unable to recover it. 00:33:55.240 [2024-10-13 17:44:03.643595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.643731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.643741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.240 qpair failed and we were unable to recover it. 00:33:55.240 [2024-10-13 17:44:03.644043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.644356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.644366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.240 qpair failed and we were unable to recover it. 
00:33:55.240 [2024-10-13 17:44:03.644672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.644867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.644878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.240 qpair failed and we were unable to recover it. 00:33:55.240 [2024-10-13 17:44:03.645190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.645501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.645512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.240 qpair failed and we were unable to recover it. 00:33:55.240 [2024-10-13 17:44:03.645804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.645999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-10-13 17:44:03.646010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.646348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.646664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.646675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 
00:33:55.241 [2024-10-13 17:44:03.646981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.647171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.647182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.647492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.647649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.647661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.647970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.648307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.648318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.648623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.648814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.648825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 
00:33:55.241 [2024-10-13 17:44:03.649013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.649319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.649331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.649522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.649855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.649865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.650026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.650186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.650197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.650498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.650831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.650841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 
00:33:55.241 [2024-10-13 17:44:03.651140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.651461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.651473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.651786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.651985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.651998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.652199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.652373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.652385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.652664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.652967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.652977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 
00:33:55.241 [2024-10-13 17:44:03.653284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.653605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.653616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.653903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.654228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.654240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.654580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.654898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.654910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.655219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.655313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.655322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 
00:33:55.241 [2024-10-13 17:44:03.655646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.655846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.655858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.656205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.656520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.656532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.656757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.657079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.657090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.657405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.657723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.657736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 
00:33:55.241 [2024-10-13 17:44:03.658042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.658360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.658372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.658680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.658981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.658992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.241 qpair failed and we were unable to recover it. 00:33:55.241 [2024-10-13 17:44:03.659299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-10-13 17:44:03.659615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.659625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.659931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.660231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.660241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 
00:33:55.242 [2024-10-13 17:44:03.660547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.660843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.660856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.661036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.661343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.661354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.661686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.661823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.661835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.662022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.662206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.662218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 
00:33:55.242 [2024-10-13 17:44:03.662259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.662377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.662389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.662706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.663033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.663044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.663386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.663707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.663718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.664011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.664194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.664207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 
00:33:55.242 [2024-10-13 17:44:03.664365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.664539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.664551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.664756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.665051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.665067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.665161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.665343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.665354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.665751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.665944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.665954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 
00:33:55.242 [2024-10-13 17:44:03.666269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.666616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.666628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.666930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.667225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.667236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.667407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.667688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.667698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.667877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.668051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.668067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 
00:33:55.242 [2024-10-13 17:44:03.668355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.668697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.668708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.668878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.669107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.669118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.669438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.669759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.669770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.669958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.670252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.670262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 
00:33:55.242 [2024-10-13 17:44:03.670469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.670777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.670788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.671117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.671441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.671452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.671625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.671784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.671797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 00:33:55.242 [2024-10-13 17:44:03.671968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.672239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.242 [2024-10-13 17:44:03.672250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.242 qpair failed and we were unable to recover it. 
00:33:55.246 [2024-10-13 17:44:03.718908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.718956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.718966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.719156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.719346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.719357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.719548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.719725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.719736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.720036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.720220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.720231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 
00:33:55.246 [2024-10-13 17:44:03.720531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.720746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.720758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.721072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.721391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.721401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.721442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.721780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.721791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.722105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.722401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.722412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 
00:33:55.246 [2024-10-13 17:44:03.722534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.722798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.722809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.723036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.723234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.723246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.723478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.723794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.723805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.724172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.724476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.724487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 
00:33:55.246 [2024-10-13 17:44:03.724822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.725051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.725061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.725341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.725662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.725673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.725982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.726255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.726267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.726561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.726827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.726838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 
00:33:55.246 [2024-10-13 17:44:03.727126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.727460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.727471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.727786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.728132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.728143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.728365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.728710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.728720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.729039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.729123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.729135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 
00:33:55.246 [2024-10-13 17:44:03.729438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.729750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.729761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.730091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.730430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.246 [2024-10-13 17:44:03.730441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.246 qpair failed and we were unable to recover it. 00:33:55.246 [2024-10-13 17:44:03.730725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.730894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.730904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.731182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.731485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.731495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 
00:33:55.247 [2024-10-13 17:44:03.731746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.731891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.731902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.732204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.732457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.732467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.732641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.732736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.732746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.733043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.733386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.733397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 
00:33:55.247 [2024-10-13 17:44:03.733709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.734043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.734053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.734346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.734675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.734686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.734995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.735322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.735334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.735642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.735985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.735996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 
00:33:55.247 [2024-10-13 17:44:03.736353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.736670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.736681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.736991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.737310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.737322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.737663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.737985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.737998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.738371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.738676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.738688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 
00:33:55.247 [2024-10-13 17:44:03.739016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.739059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.739123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.739174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.739353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.247 [2024-10-13 17:44:03.739364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.247 qpair failed and we were unable to recover it. 00:33:55.247 [2024-10-13 17:44:03.739552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.739831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.739845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.740192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.740377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.740387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 
00:33:55.520 [2024-10-13 17:44:03.740693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.740996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.741010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.741320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.741608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.741618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.741809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.741987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.741998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.742186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.742364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.742376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 
00:33:55.520 [2024-10-13 17:44:03.742550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.742838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.742849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.743125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.743471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.743482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.743641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.743884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.743894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.744227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.744557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.744568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 
00:33:55.520 [2024-10-13 17:44:03.744875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.745199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.745211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.745538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.745854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.745864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.746209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.746558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.746568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.746740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.746922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.746934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 
00:33:55.520 [2024-10-13 17:44:03.747208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.747551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.747562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.747873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.748196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.748207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.748488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.748814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.748824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.520 [2024-10-13 17:44:03.749136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.749455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.749466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 
00:33:55.520 [2024-10-13 17:44:03.749774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.750094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.520 [2024-10-13 17:44:03.750105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.520 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.750428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.750637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.750648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.750821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.750979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.750990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.751311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.751409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.751418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 
00:33:55.521 [2024-10-13 17:44:03.751725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.751986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.751997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.752201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.752385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.752397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.752729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.752895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.752906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.753270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.753466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.753477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 
00:33:55.521 [2024-10-13 17:44:03.753674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.753985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.753997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.754182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.754351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.754361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.754413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.754626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.754638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.754917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.755241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.755252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 
00:33:55.521 [2024-10-13 17:44:03.755552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.755892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.755903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.756075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.756376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.756386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.756715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.757033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.757043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.757376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.757539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.757549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 
00:33:55.521 [2024-10-13 17:44:03.757866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.758215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.758227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.758554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.758894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.758905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.759239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.759399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.759409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.759718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.760071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.760083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 
00:33:55.521 [2024-10-13 17:44:03.760370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.760712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.760723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.761030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.761324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.761335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.761613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.761924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.761936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.762220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.762557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.762568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 
00:33:55.521 [2024-10-13 17:44:03.762870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.763193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.763204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.763532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.763855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.763867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.764052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.764369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.764380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 00:33:55.521 [2024-10-13 17:44:03.764657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.764973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.521 [2024-10-13 17:44:03.764984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.521 qpair failed and we were unable to recover it. 
00:33:55.522 [2024-10-13 17:44:03.765142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.765449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.765459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.765643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.765977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.765987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.766150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.766465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.766477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.766785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.767128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.767139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 
00:33:55.522 [2024-10-13 17:44:03.767463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.767654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.767664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.768033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.768374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.768385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.768667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.768986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.768999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.769153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.769197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.769210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 
00:33:55.522 [2024-10-13 17:44:03.769366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.769666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.769678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.769984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.770288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.770300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.770580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.770898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.770909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.771219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.771379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.771390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 
00:33:55.522 [2024-10-13 17:44:03.771696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.772038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.772049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.772361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.772679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.772690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.772978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.773299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.773311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.773632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.773948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.773958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 
00:33:55.522 [2024-10-13 17:44:03.774269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.774586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.774597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.774915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.775233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.775247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.775528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.775845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.775856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.776009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.776260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.776272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 
00:33:55.522 [2024-10-13 17:44:03.776582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.776899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.776909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.777262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.777412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.777421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.777712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.778026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.778039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.778223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.778565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.778577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 
00:33:55.522 [2024-10-13 17:44:03.778776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.779087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.779099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.779277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.779508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.779520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.522 [2024-10-13 17:44:03.779839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.780034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.522 [2024-10-13 17:44:03.780045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.522 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.780380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.780699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.780709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 
00:33:55.523 [2024-10-13 17:44:03.781022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.781341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.781352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.781664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.781838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.781850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.782043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.782350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.782363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.782633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.782923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.782935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 
00:33:55.523 [2024-10-13 17:44:03.783101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.783398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.783408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.783714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.784036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.784049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.784339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.784642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.784654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.784999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.785344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.785357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 
00:33:55.523 [2024-10-13 17:44:03.785536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.785729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.785741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.786050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.786385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.786397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.786732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.787097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.787109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.787451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.787632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.787643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 
00:33:55.523 [2024-10-13 17:44:03.787830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.788156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.788167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.788514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.788809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.788821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.789154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.789447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.789459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.789643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.789950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.789962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 
00:33:55.523 [2024-10-13 17:44:03.790289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.790448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.790459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.790713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.790758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.790769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.790947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.790992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.791004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.791169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.791491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.791502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 
00:33:55.523 [2024-10-13 17:44:03.791692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.792005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.792016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.792327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.792653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.792665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.793005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.793348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.793361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.793516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.793826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.793837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 
00:33:55.523 [2024-10-13 17:44:03.794149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.794308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.794319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.523 [2024-10-13 17:44:03.794652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.794934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-10-13 17:44:03.794945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.523 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.795270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.795497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.795508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.795834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.796042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.796052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 
00:33:55.524 [2024-10-13 17:44:03.796372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.796702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.796714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.797020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.797322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.797334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.797628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.797953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.797965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.798156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.798469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.798480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 
00:33:55.524 [2024-10-13 17:44:03.798794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.799111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.799122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.799445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.799746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.799758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.800088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.800399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.800410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.800714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.801029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.801040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 
00:33:55.524 [2024-10-13 17:44:03.801265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.801449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.801461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.801769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.802111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.802122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.802442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.802762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.802773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.803086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.803436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.803447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 
00:33:55.524 [2024-10-13 17:44:03.803620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.803891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.803905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.804236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.804569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.804580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.804770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.805089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.805102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.805416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.805742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.805752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 
00:33:55.524 [2024-10-13 17:44:03.805988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.806263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.806275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.806605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.806770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.806781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.806958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.807122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.807133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.807423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.807593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.807604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 
00:33:55.524 [2024-10-13 17:44:03.807931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.808221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.808233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.808538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.808857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.808869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.809181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.809363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.809374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.524 [2024-10-13 17:44:03.809755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.810112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.810123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 
00:33:55.524 [2024-10-13 17:44:03.810298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.810622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.524 [2024-10-13 17:44:03.810633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.524 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.810754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.811052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.811068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.811370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.811509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.811519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.811713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.811907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.811918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 
00:33:55.525 [2024-10-13 17:44:03.812185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.812495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.812506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.812687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.812990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.813001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.813162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.813456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.813468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.813628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.813785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.813797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 
00:33:55.525 [2024-10-13 17:44:03.814108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.814430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.814442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.814753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.815071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.815084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.815324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.815465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.815477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.815646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.815987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.815999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 
00:33:55.525 [2024-10-13 17:44:03.816330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.816673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.816685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.816994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.817079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.817090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.817375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.817701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.817715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.818026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.818227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.818240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 
00:33:55.525 [2024-10-13 17:44:03.818409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.818699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.818710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.819018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.819326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.819338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.819673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.819991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.820003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.820198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.820426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.820437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 
00:33:55.525 [2024-10-13 17:44:03.820746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.821069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.821082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.821392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.821751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.821762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.822102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.822435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.822448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 00:33:55.525 [2024-10-13 17:44:03.822758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.823079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.525 [2024-10-13 17:44:03.823092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.525 qpair failed and we were unable to recover it. 
00:33:55.526 [2024-10-13 17:44:03.823406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.823633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.823645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 00:33:55.526 [2024-10-13 17:44:03.823844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.824186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.824198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 00:33:55.526 [2024-10-13 17:44:03.824391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.824689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.824702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 00:33:55.526 [2024-10-13 17:44:03.825012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.825179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.825191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 
00:33:55.526 [2024-10-13 17:44:03.825486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.825681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.825693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 00:33:55.526 [2024-10-13 17:44:03.825875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.826054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.826083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 00:33:55.526 [2024-10-13 17:44:03.826386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.826707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.826720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 00:33:55.526 [2024-10-13 17:44:03.826870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.827202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.827215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 
00:33:55.526 [2024-10-13 17:44:03.827527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.827843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.827855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 00:33:55.526 [2024-10-13 17:44:03.828084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.828427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.828440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 00:33:55.526 [2024-10-13 17:44:03.828655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.828968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.828980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 00:33:55.526 [2024-10-13 17:44:03.829165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.829357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.526 [2024-10-13 17:44:03.829368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.526 qpair failed and we were unable to recover it. 
00:33:55.526 [2024-10-13 17:44:03.829668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.829853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.829865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.830120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.830168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.830177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.830465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.830787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.830799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.831106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.831412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.831426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.831698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.832009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.832022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.832354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.832650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.832660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.832842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.833145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.833156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.833517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.833765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.833777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.834095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.834438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.834450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.834800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.834960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.834970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.835260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.835588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.835599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.835914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.836228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.836240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.836490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.836673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.836684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.836874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.837231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.837242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.526 [2024-10-13 17:44:03.837541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.837884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.526 [2024-10-13 17:44:03.837895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.526 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.838208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.838559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.838570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.838871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.839188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.839200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.839514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.839811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.839821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.840127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.840471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.840482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.840805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.841124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.841135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.841440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.841635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.841646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.842001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.842190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.842201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.842508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.842699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.842711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.843081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.843383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.843394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.843705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.844026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.844036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.844215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.844392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.844402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.844701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.845025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.845036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.845355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.845555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.845566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.845755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.845957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.845968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.846150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.846461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.846472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.846808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.847103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.847114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.847448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.847642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.847653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.848019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.848336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.848347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.848626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.848972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.848983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.849181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.849489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.849500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.849810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.849999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.850009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.850309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.850495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.850507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.850832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.851094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.851105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.851307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.851484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.851496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.851756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.852051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.852068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.852357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.852645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.852657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.527 [2024-10-13 17:44:03.852964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.853130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.527 [2024-10-13 17:44:03.853141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.527 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.853486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.853643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.853654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.853923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.854269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.854281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.854669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.854855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.854867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.855176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.855506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.855517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.855728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.856076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.856087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.856391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.856697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.856709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.857020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.857347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.857359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.857655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.857980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.857992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.858290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.858475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.858488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.858794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.859113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.859126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.859488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.859814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.859825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.860012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.860303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.860315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.860606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.860927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.860941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.861254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.861555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.861567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.861880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.862180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.862192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.862498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.862842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.862853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.863186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.863501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.863512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.863820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.864140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.864151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.864462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.864770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.864782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.865098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.865384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.865396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.865714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.866026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.866037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.866396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.866698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.866708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.867018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.867388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.867402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.867546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.867714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.867725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.867972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.868134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.868144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.868428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.868745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.868755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.528 qpair failed and we were unable to recover it.
00:33:55.528 [2024-10-13 17:44:03.869086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.528 [2024-10-13 17:44:03.869283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.869294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.869456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.869611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.869621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.869922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.870218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.870230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.870418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.870761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.870771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.870955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.871297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.871309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.871500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.871781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.871793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.871961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.872125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.872136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.872316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.872474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.872485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.872795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.873113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.873124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.873293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.873494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.873504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.873811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.874152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.874165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.874469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.874795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.874806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.875112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.875455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.875466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.875852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.876145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.876158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.876487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.876821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.876831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.877172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.877458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.877468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.877777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.878099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.878111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.878273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.878429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.878439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.878742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.879070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.879083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.879240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.879566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.529 [2024-10-13 17:44:03.879577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.529 qpair failed and we were unable to recover it.
00:33:55.529 [2024-10-13 17:44:03.879889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.880213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.880225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.529 qpair failed and we were unable to recover it. 00:33:55.529 [2024-10-13 17:44:03.880525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.880844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.880854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.529 qpair failed and we were unable to recover it. 00:33:55.529 [2024-10-13 17:44:03.881157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.881493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.881504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.529 qpair failed and we were unable to recover it. 00:33:55.529 [2024-10-13 17:44:03.881811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.882160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.882172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.529 qpair failed and we were unable to recover it. 
00:33:55.529 [2024-10-13 17:44:03.882483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.882801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.882812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.529 qpair failed and we were unable to recover it. 00:33:55.529 [2024-10-13 17:44:03.883116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.883433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.883443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.529 qpair failed and we were unable to recover it. 00:33:55.529 [2024-10-13 17:44:03.883722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.884070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.529 [2024-10-13 17:44:03.884081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.529 qpair failed and we were unable to recover it. 00:33:55.529 [2024-10-13 17:44:03.884290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.884610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.884621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 
00:33:55.530 [2024-10-13 17:44:03.884933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.885219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.885231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.885524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.885715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.885726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.886042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.886367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.886378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.886686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.887000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.887012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 
00:33:55.530 [2024-10-13 17:44:03.887322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.887633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.887644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.887806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.887973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.887984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.888024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.888346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.888358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.888658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.888969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.888981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 
00:33:55.530 [2024-10-13 17:44:03.889276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.889437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.889449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.889759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.890103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.890114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.890430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.890749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.890760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.891096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.891417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.891429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 
00:33:55.530 [2024-10-13 17:44:03.891747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.891934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.891945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.892230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.892562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.892573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.892859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.893166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.893177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.893486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.893803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.893813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 
00:33:55.530 [2024-10-13 17:44:03.894117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.894458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.894469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.894739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.895050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.895061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.895403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.895690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.895701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.896087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.896378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.896392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 
00:33:55.530 [2024-10-13 17:44:03.896691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.897034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.897044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.897219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.897558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.897569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.897741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.897935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.530 [2024-10-13 17:44:03.897948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.530 qpair failed and we were unable to recover it. 00:33:55.530 [2024-10-13 17:44:03.898227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.898563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.898574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 
00:33:55.531 [2024-10-13 17:44:03.898886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.899205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.899216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.899523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.899877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.899888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.900203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.900388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.900398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.900582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.900762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.900773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 
00:33:55.531 [2024-10-13 17:44:03.900937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.901123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.901134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.901183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.901414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.901425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.901612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.901805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.901815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.902102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.902446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.902457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 
00:33:55.531 [2024-10-13 17:44:03.902760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.903074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.903085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.903367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.903659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.903670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.903718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.904033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.904045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.904251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.904625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.904638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 
00:33:55.531 [2024-10-13 17:44:03.904821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.905109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.905121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.905480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.905823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.905834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.906025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.906177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.906189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.906461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.906772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.906782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 
00:33:55.531 [2024-10-13 17:44:03.907111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.907340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.907351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.907660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.907940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.907951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.908341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.908635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.908645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.908945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.909244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.909255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 
00:33:55.531 [2024-10-13 17:44:03.909574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.909734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.909744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.910006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.910334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.910345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.910625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.910792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.910803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.911079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.911415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.911427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 
00:33:55.531 [2024-10-13 17:44:03.911735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.912054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.912071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.912377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.912693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.531 [2024-10-13 17:44:03.912704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.531 qpair failed and we were unable to recover it. 00:33:55.531 [2024-10-13 17:44:03.912992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.913221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.913232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.913497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.913717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.913728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 
00:33:55.532 [2024-10-13 17:44:03.914038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.914381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.914392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.914701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.915021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.915032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.915193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.915380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.915390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.915707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.916036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.916047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 
00:33:55.532 [2024-10-13 17:44:03.916363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.916551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.916562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.916869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.917052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.917067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.917397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.917716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.917728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.918035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.918368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.918379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 
00:33:55.532 [2024-10-13 17:44:03.918635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.918950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.918962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.919232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.919561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.919572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.919901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.920195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.920206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.920368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.920689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.920700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 
00:33:55.532 [2024-10-13 17:44:03.921002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.921307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.921317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.921625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.921947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.921959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.922273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.922369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.922381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.922554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.922740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.922751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 
00:33:55.532 [2024-10-13 17:44:03.922943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.923250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.923261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.923591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.923912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.923923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.924132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.924398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.924412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.924746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.925098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.925109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 
00:33:55.532 [2024-10-13 17:44:03.925314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.925552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.925562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.925769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.926048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.926059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.926369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.926699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.926710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.926757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.926867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.926877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 
00:33:55.532 [2024-10-13 17:44:03.927072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.927385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.927396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.532 qpair failed and we were unable to recover it. 00:33:55.532 [2024-10-13 17:44:03.927549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.532 [2024-10-13 17:44:03.927710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.927721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.928025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.928239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.928250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.928581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.928876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.928887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 
00:33:55.533 [2024-10-13 17:44:03.929210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.929568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.929579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.929892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.930052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.930067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.930227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.930522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.930532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.930820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.931130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.931142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 
00:33:55.533 [2024-10-13 17:44:03.931318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.931496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.931515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.931836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.932135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.932147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.932472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.932796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.932809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.933145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.933431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.933442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 
00:33:55.533 [2024-10-13 17:44:03.933759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.934077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.934088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.934295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.934669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.934679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.934984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.935272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.935283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.935594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.935756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.935767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 
00:33:55.533 [2024-10-13 17:44:03.936147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.936454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.936465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.936773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.937091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.937102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.937436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.937594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.937604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.937789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.938070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.938081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 
00:33:55.533 [2024-10-13 17:44:03.938463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.938765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.938776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.939091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.939411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.939422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.939786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.940037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.940049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.940113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.940227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.940237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 
00:33:55.533 [2024-10-13 17:44:03.940480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.940795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.940807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.941112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.941347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.941357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.941549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.941862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.941872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.533 qpair failed and we were unable to recover it. 00:33:55.533 [2024-10-13 17:44:03.942039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.533 [2024-10-13 17:44:03.942232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.942242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 
00:33:55.534 [2024-10-13 17:44:03.942576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.942874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.942886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.943222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.943445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.943455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.943797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.943992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.944002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.944192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.944502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.944513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 
00:33:55.534 [2024-10-13 17:44:03.944692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.944877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.944888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.945251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.945530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.945542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.945823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.946164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.946174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.946498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.946704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.946714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 
00:33:55.534 [2024-10-13 17:44:03.947002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.947198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.947209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.947538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.947865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.947877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.948070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.948382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.948393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.948733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.949069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.949081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 
00:33:55.534 [2024-10-13 17:44:03.949408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.949760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.949770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.950006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.950192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.950203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.950476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.950656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.950668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.950858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.951014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.951025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 
00:33:55.534 [2024-10-13 17:44:03.951347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.951594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.951605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.951928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.952108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.952121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.952443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.952651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.952661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.952978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.953286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.953297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 
00:33:55.534 [2024-10-13 17:44:03.953342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.953528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.953538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.953719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.954037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.954048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.954297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.954641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.954651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.534 qpair failed and we were unable to recover it. 00:33:55.534 [2024-10-13 17:44:03.954951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.955274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.534 [2024-10-13 17:44:03.955285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 
00:33:55.535 [2024-10-13 17:44:03.955463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.955790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.955802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.956110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.956338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.956348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.956694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.956843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.956853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.957051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.957385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.957396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 
00:33:55.535 [2024-10-13 17:44:03.957587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.957890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.957901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.958058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.958420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.958430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.958624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.958786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.958798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.958974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.959283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.959304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 
00:33:55.535 [2024-10-13 17:44:03.959499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.959845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.959856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.959991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.960269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.960281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.960459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.960784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.960794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.961108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.961453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.961463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 
00:33:55.535 [2024-10-13 17:44:03.961650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.961979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.961989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.962208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.962553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.962564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.962925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.963163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.963174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.963412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.963731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.963742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 
00:33:55.535 [2024-10-13 17:44:03.964050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.964268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.964280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.964590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.964889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.964900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.965103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.965400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.965411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.965723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.966043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.966053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 
00:33:55.535 [2024-10-13 17:44:03.966406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.966576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.966588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.966858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.967170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.967181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.967386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.967733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.967744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.968059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.968247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.968258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 
00:33:55.535 [2024-10-13 17:44:03.968585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.968770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.968781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.968969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.969310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.969321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.535 qpair failed and we were unable to recover it. 00:33:55.535 [2024-10-13 17:44:03.969657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.535 [2024-10-13 17:44:03.969990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.970001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.970193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.970539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.970550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 
00:33:55.536 [2024-10-13 17:44:03.970711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.970925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.970938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.971104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.971377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.971388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.971723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.971897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.971906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.972247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.972414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.972424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 
00:33:55.536 [2024-10-13 17:44:03.972697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.972876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.972887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.973077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.973272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.973283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.973329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.973489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.973500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.973706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.974027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.974038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 
00:33:55.536 [2024-10-13 17:44:03.974208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.974372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.974382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.974758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.975112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.975123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.975316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.975498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.975508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.975827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.976132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.976143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 
00:33:55.536 [2024-10-13 17:44:03.976459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.976813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.976824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.977143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.977508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.977519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.977833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.978207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.978218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.978503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.978822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.978832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 
00:33:55.536 [2024-10-13 17:44:03.979186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.979541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.979555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.979865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.980169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.980180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.980362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.980671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.980682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.980860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.981175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.981186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 
00:33:55.536 [2024-10-13 17:44:03.981400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.981698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.981709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.981759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.981934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.981944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.982305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.982625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.982636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.982809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.983108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.983120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 
00:33:55.536 [2024-10-13 17:44:03.983329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.983630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.536 [2024-10-13 17:44:03.983641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.536 qpair failed and we were unable to recover it. 00:33:55.536 [2024-10-13 17:44:03.983812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.983974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.983985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.984288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.984657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.984671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.985007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.985333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.985344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 
00:33:55.537 [2024-10-13 17:44:03.985675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.986010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.986022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.986337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.986758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.986768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.987095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.987427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.987437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.987795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.987964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.987974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 
00:33:55.537 [2024-10-13 17:44:03.988179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.988495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.988511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.988911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.989190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.989200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.989385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.989744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.989753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.989924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.990112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.990123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 
00:33:55.537 [2024-10-13 17:44:03.990442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.990747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.990758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.990952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.991244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.991255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.991582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.991918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.991928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.992180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.992474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.992484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 
00:33:55.537 [2024-10-13 17:44:03.992797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.992991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.993001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.993350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.993526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.993536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.993705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.993981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.993990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 00:33:55.537 [2024-10-13 17:44:03.994185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.994380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.537 [2024-10-13 17:44:03.994390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.537 qpair failed and we were unable to recover it. 
00:33:55.537 [2024-10-13 17:44:03.994571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.994771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.994780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.537 qpair failed and we were unable to recover it.
00:33:55.537 [2024-10-13 17:44:03.994984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.995358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.995368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.537 qpair failed and we were unable to recover it.
00:33:55.537 [2024-10-13 17:44:03.995562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.995916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.995925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.537 qpair failed and we were unable to recover it.
00:33:55.537 [2024-10-13 17:44:03.996131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.996531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.996541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.537 qpair failed and we were unable to recover it.
00:33:55.537 [2024-10-13 17:44:03.996591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.996881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.996902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.537 qpair failed and we were unable to recover it.
00:33:55.537 [2024-10-13 17:44:03.997052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.997094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.997104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.537 qpair failed and we were unable to recover it.
00:33:55.537 [2024-10-13 17:44:03.997314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.997504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.997513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.537 qpair failed and we were unable to recover it.
00:33:55.537 [2024-10-13 17:44:03.997828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.998066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.537 [2024-10-13 17:44:03.998076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.537 qpair failed and we were unable to recover it.
00:33:55.537 [2024-10-13 17:44:03.998450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:03.998755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:03.998764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:03.998954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:03.999230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:03.999242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:03.999617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:03.999913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:03.999923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.000309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.000604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.000614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.000952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.001261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.001271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.001448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.001768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.001778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.002128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.002450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.002460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.002756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.003058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.003075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.003396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.003704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.003714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.004048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.004366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.004376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.004672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.004974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.004985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.005380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.005683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.005693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.005990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.006357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.006367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.006660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.006968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.006977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.007279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.007636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.007645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.007867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.008199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.008210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.008515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.008710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.008720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.008899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.009219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.009229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.009578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.009780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.009789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.010095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.010342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.010353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.010540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.010840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.010850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.011187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.011522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.011531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.011694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.011977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.011986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.012342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.012532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.012543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.012706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.012907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.012917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.013036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.013240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.013253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.013444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.013728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.538 [2024-10-13 17:44:04.013738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.538 qpair failed and we were unable to recover it.
00:33:55.538 [2024-10-13 17:44:04.014109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.014432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.014441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.014548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.014844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.014853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.015185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.015519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.015529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.015738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.016094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.016104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.016447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.016669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.016679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.017104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.017425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.017435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.017629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.017990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.018000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.018335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.018644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.018654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.018979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.019274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.019285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.019572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.019900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.019909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.020207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.020505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.020514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.020827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.021034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.021045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.021398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.021567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.021577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.021900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.022208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.022218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.022513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.022848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.022857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.023055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.023422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.023433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.023724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.023924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.023935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.024268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.024616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.024627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.024868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.025026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.025035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.025356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.025693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.025703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.025992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.026266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.026276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.026624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.026956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.539 [2024-10-13 17:44:04.026965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.539 qpair failed and we were unable to recover it.
00:33:55.539 [2024-10-13 17:44:04.027335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.027664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.027673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.027857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.028149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.028160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.028462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.028539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.028548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.028877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.029176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.029186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.029343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.029564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.029574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.029786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.030084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.030095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.030441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.030818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.030827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.031169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.031479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.031488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.031811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.032024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.032034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.032218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.032558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.032567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.032867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.033080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.540 [2024-10-13 17:44:04.033091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.540 qpair failed and we were unable to recover it.
00:33:55.540 [2024-10-13 17:44:04.033485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.808 [2024-10-13 17:44:04.033873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.808 [2024-10-13 17:44:04.033883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.808 qpair failed and we were unable to recover it.
00:33:55.808 [2024-10-13 17:44:04.034181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.808 [2024-10-13 17:44:04.034517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.808 [2024-10-13 17:44:04.034528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.808 qpair failed and we were unable to recover it.
00:33:55.808 [2024-10-13 17:44:04.034727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.808 [2024-10-13 17:44:04.035069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.808 [2024-10-13 17:44:04.035079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.808 qpair failed and we were unable to recover it.
00:33:55.808 [2024-10-13 17:44:04.035403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.808 [2024-10-13 17:44:04.035738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.808 [2024-10-13 17:44:04.035748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.808 qpair failed and we were unable to recover it.
00:33:55.808 [2024-10-13 17:44:04.036080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.036380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.036390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.808 qpair failed and we were unable to recover it. 00:33:55.808 [2024-10-13 17:44:04.036587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.036864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.036874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.808 qpair failed and we were unable to recover it. 00:33:55.808 [2024-10-13 17:44:04.037227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.037527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.037536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.808 qpair failed and we were unable to recover it. 00:33:55.808 [2024-10-13 17:44:04.037828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.038129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.038140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.808 qpair failed and we were unable to recover it. 
00:33:55.808 [2024-10-13 17:44:04.038469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.038689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.038699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.808 qpair failed and we were unable to recover it. 00:33:55.808 [2024-10-13 17:44:04.039129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.039334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.039344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.808 qpair failed and we were unable to recover it. 00:33:55.808 [2024-10-13 17:44:04.039534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.039857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.808 [2024-10-13 17:44:04.039866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.040068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.040377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.040387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-10-13 17:44:04.040685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.040736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.040746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.040881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.041032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.041041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.041226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.041437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.041446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.041852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.042025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.042034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-10-13 17:44:04.042287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.042561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.042576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.042756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.043090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.043100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.043470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.043776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.043787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.044083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.044376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.044386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-10-13 17:44:04.044715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.045011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.045020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.045382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.045696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.045707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.045878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.046076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.046087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.046279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.046625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.046635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-10-13 17:44:04.046957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.047336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.047347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.047650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.048017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.048027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.048373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.048664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.048674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.049033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.049425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.049436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-10-13 17:44:04.049622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.049806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.049816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.050024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.050185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.050196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.050496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.050812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.050823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.051156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.051477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.051487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-10-13 17:44:04.051664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.051901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.051912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.052081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.052314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.052323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.052364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.052700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.052711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.053046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.053228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.053238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-10-13 17:44:04.053433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.053596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.053606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-10-13 17:44:04.053800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.809 [2024-10-13 17:44:04.053996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.054007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.054185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.054540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.054551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.054875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.055212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.055223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 
00:33:55.810 [2024-10-13 17:44:04.055516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.055823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.055833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.056133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.056487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.056496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.056808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.057124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.057135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.057468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.057686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.057696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 
00:33:55.810 [2024-10-13 17:44:04.058044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.058276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.058287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.058579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.058764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.058774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.059206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.059557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.059568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.059874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.060234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.060246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 
00:33:55.810 [2024-10-13 17:44:04.060541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.060836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.060846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.061148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.061445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.061455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.061767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.061938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.061947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.062248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.062463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.062473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 
00:33:55.810 [2024-10-13 17:44:04.062790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.063077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.063088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.063456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.063661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.063671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.063944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.064235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.064245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.064418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.064706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.064716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 
00:33:55.810 [2024-10-13 17:44:04.064900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.064943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.064952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.065302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.065636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.065646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.065977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.066164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.066174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.066526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.066681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.066692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 
00:33:55.810 [2024-10-13 17:44:04.067067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.067387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.067398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.067560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.067850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.067860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.068181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.068475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.068485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.810 qpair failed and we were unable to recover it. 00:33:55.810 [2024-10-13 17:44:04.068786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.068968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.810 [2024-10-13 17:44:04.068979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 
00:33:55.811 [2024-10-13 17:44:04.069149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.069334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.069344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.069526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.069714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.069724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.069768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.069880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.069889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.070174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.070532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.070544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 
00:33:55.811 [2024-10-13 17:44:04.070740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.071030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.071040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.071320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.071668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.071678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.071863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.072052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.072068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.072412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.072738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.072748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 
00:33:55.811 [2024-10-13 17:44:04.073073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.073404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.073414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.073757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.073930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.073940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.074164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.074351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.074360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.074569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.074849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.074858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 
00:33:55.811 [2024-10-13 17:44:04.075075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.075353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.075363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.075704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.075947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.075959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.076140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.076512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.076522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.076862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.077218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.077229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 
00:33:55.811 [2024-10-13 17:44:04.077576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.077878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.077887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.078180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.078531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.078541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.078854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.079035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.079045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.079279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.079672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.079682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 
00:33:55.811 [2024-10-13 17:44:04.079973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.080178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.080188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.080479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.080782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.080793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.081130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.081444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.081454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.081774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.082131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.082141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 
00:33:55.811 [2024-10-13 17:44:04.082341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.082560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.082570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.082763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.083073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.083083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.811 qpair failed and we were unable to recover it. 00:33:55.811 [2024-10-13 17:44:04.083315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.811 [2024-10-13 17:44:04.083495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.083505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.083794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 17:44:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:55.812 [2024-10-13 17:44:04.083910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.083920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 
00:33:55.812 [2024-10-13 17:44:04.084083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 17:44:04 -- common/autotest_common.sh@852 -- # return 0 00:33:55.812 [2024-10-13 17:44:04.084249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.084259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.084339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.084523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.084533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 17:44:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:55.812 [2024-10-13 17:44:04.084728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 17:44:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:55.812 [2024-10-13 17:44:04.084890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.084900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 
00:33:55.812 17:44:04 -- common/autotest_common.sh@10 -- # set +x 00:33:55.812 [2024-10-13 17:44:04.085067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.085145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.085155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.085336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.085513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.085522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.085939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.086142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.086153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.086464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.086760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.086770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 
00:33:55.812 [2024-10-13 17:44:04.086944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.087244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.087254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.087414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.087694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.087703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.087878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.088172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.088182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.088370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.088674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.088684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 
00:33:55.812 [2024-10-13 17:44:04.088961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.089290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.089300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.089608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.089937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.089946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.090260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.090627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.090638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.090956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.091251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.091261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 
00:33:55.812 [2024-10-13 17:44:04.091522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.091820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.091835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.092132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.092449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.092459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.092760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.093059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.093076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.093451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.093814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.093824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 
00:33:55.812 [2024-10-13 17:44:04.094110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.094490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.094500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.094739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.095018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.095029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.095397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.095708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.095717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.095878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.096235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.096245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 
00:33:55.812 [2024-10-13 17:44:04.096559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.096942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.812 [2024-10-13 17:44:04.096952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.812 qpair failed and we were unable to recover it. 00:33:55.812 [2024-10-13 17:44:04.097213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.097402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.097412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.097457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.097643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.097655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.097876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.098179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.098190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 
00:33:55.813 [2024-10-13 17:44:04.098362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.098685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.098694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.099001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.099366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.099376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.099711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.100054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.100069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.100380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.100678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.100688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 
00:33:55.813 [2024-10-13 17:44:04.100855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.101169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.101179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.101555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.101863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.101874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.101947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.102112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.102122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.102451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.102632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.102641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 
00:33:55.813 [2024-10-13 17:44:04.102813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.103131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.103141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.103460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.103765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.103775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.104067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.104255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.104265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.104568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.104854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.104865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 
00:33:55.813 [2024-10-13 17:44:04.105080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.105301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.105311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.105504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.105832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.105842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.106135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.106471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.106481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.106665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.107009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.107020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 
00:33:55.813 [2024-10-13 17:44:04.107360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.107653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.107664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.107995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.108315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.108325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.108640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.108806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.108816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 00:33:55.813 [2024-10-13 17:44:04.109130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.109452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.813 [2024-10-13 17:44:04.109463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.813 qpair failed and we were unable to recover it. 
00:33:55.814 [2024-10-13 17:44:04.109622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.109801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.109811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.814 qpair failed and we were unable to recover it. 00:33:55.814 [2024-10-13 17:44:04.109991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.110033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.110044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.814 qpair failed and we were unable to recover it. 00:33:55.814 [2024-10-13 17:44:04.110216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.110413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.110424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.814 qpair failed and we were unable to recover it. 00:33:55.814 [2024-10-13 17:44:04.110714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.111010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.111020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.814 qpair failed and we were unable to recover it. 
00:33:55.814 [2024-10-13 17:44:04.111248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.111456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.111465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.814 qpair failed and we were unable to recover it. 00:33:55.814 [2024-10-13 17:44:04.111836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.112035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.112045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.814 qpair failed and we were unable to recover it. 00:33:55.814 [2024-10-13 17:44:04.112370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.112567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.112577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.814 qpair failed and we were unable to recover it. 00:33:55.814 [2024-10-13 17:44:04.112905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.113237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.814 [2024-10-13 17:44:04.113248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420 00:33:55.814 qpair failed and we were unable to recover it. 
00:33:55.814 [2024-10-13 17:44:04.113532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.113750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.113760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.114103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.114505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.114516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.114760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.115087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.115097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.115423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.115765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.115776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.115939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.116251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.116262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.116592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.116921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.116932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.117238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.117547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.117556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.117744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.118070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.118081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.118243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.118533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.118543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.118850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.119178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.119188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.119524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.119872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.119881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.120001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.120245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.120256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.120408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.120449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.120460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.120639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.120818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.120828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.121165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.121496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.121506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.121656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.121943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.121954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.122308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.122607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.122619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 17:44:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:55.814 [2024-10-13 17:44:04.122907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 17:44:04 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 [2024-10-13 17:44:04.123232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.814 [2024-10-13 17:44:04.123244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.814 qpair failed and we were unable to recover it.
00:33:55.814 [2024-10-13 17:44:04.123289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 17:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable [2024-10-13 17:44:04.123459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.123469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 17:44:04 -- common/autotest_common.sh@10 -- # set +x [2024-10-13 17:44:04.123645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.123822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.123833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.124003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.124048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.124057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.124116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.124346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.124355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.124407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.124758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.124769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.124958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.125263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.125274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.125652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.125819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.125829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.126141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.126559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.126569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.126875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.127218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.127230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.127566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.127726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.127736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.127961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.128260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.128270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.128578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.128903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.128913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.129214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.129528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.129537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.129828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.130157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.130167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.130507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.130835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.130846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.131125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.131449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.131459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.131815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.132021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.132031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.132386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.132551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.132562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.132899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.133084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.133094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.133402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.133577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.133586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.133988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.134300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.134309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.134610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.134903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.134913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.135228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.135521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.135531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.135900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.136228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.136239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.136537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.136840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.136849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.137197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.137494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.137504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.815 qpair failed and we were unable to recover it.
00:33:55.815 [2024-10-13 17:44:04.137817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.815 [2024-10-13 17:44:04.138130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.138141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.138351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.138618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.138627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.139053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.139434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.139444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.139641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 Malloc0 [2024-10-13 17:44:04.139837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.139846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.140218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 17:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] [2024-10-13 17:44:04.140543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.140553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.140793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 17:44:04 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o [2024-10-13 17:44:04.141084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.141094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 17:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable 17:44:04 -- common/autotest_common.sh@10 -- # set +x [2024-10-13 17:44:04.141426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.141590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.141600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.141812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.142045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.142054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.142303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.142640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.142649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.142965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.143250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.143260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.143473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.143813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.143822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.144016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.144259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.144269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.144547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.144894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.144904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.145205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.145593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.145603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.145890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.146203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.146213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.146515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.146834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.146845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.147170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.147341] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** [2024-10-13 17:44:04.147525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.147538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.147846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.148181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.148192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.148517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.148823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.148834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.149134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.149468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.149478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.149777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.150153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.150163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.150212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.150561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.150570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.150875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.151056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.151071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.151124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.151182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.151191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.151360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.151663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.151672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.151731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.151947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.816 [2024-10-13 17:44:04.151957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.816 qpair failed and we were unable to recover it.
00:33:55.816 [2024-10-13 17:44:04.152179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.152526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.152535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.152827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.153035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.153045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.153376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.153677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.153687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.153882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.154080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.154090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.154287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.154584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.154593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.154830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.155015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.155025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.155207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.155397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.155406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.155630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.155810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.155820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.156003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.156209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.156219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 17:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:55.817 [2024-10-13 17:44:04.156532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 17:44:04 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:55.817 [2024-10-13 17:44:04.156826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-13 17:44:04.156836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 17:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:55.817 [2024-10-13 17:44:04.157165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 17:44:04 -- common/autotest_common.sh@10 -- # set +x
00:33:55.817 [2024-10-13 17:44:04.157511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-13 17:44:04.157521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.157685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.158028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.158038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.158349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.158649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.158659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.158983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.159270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.159280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.159662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.159832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.159842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.160188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.160529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.160538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.160709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.161060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.161079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.161336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.161663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.161672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.162011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.162329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.162339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.162657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.162974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.162984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.163281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.163607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.163630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.163919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.164242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.164251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.164564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.164867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.164877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.165229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.165544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.165555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.165864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.166182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.166192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.166355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.166632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.817 [2024-10-13 17:44:04.166642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.817 qpair failed and we were unable to recover it.
00:33:55.817 [2024-10-13 17:44:04.166783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.166946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.166957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.167120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.167428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.167437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.167755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.167965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.167976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.168346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 17:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
[2024-10-13 17:44:04.168534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-13 17:44:04.168543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.168723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 17:44:04 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
[2024-10-13 17:44:04.169059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-13 17:44:04.169075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 17:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable
17:44:04 -- common/autotest_common.sh@10 -- # set +x
[2024-10-13 17:44:04.169388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.169724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-13 17:44:04.169735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.169936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.170250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-13 17:44:04.170260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.170599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.170763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.170774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.171117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.171438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.171449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.171678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.171851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.171862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.172180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.172452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.172461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.172752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.173059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.173073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.173377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.173597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.173607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.173947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.174277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.174287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.174473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.174705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.174715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.175038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.175270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.175280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.175321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.175685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.175695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.176020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.176365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.176375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.176570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.176774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.176783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.177153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.177485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.177494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.177670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.177845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.177854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.177902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.178237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.178247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.818 [2024-10-13 17:44:04.178534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.178835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.818 [2024-10-13 17:44:04.178845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.818 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.179134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.179330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.179338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.179664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.179901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.179911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.180105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.180402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.180412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 17:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
[2024-10-13 17:44:04.180621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 17:44:04 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-10-13 17:44:04.180915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-13 17:44:04.180925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.181125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 17:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable
17:44:04 -- common/autotest_common.sh@10 -- # set +x
[2024-10-13 17:44:04.181487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-13 17:44:04.181496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.181851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.182214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-13 17:44:04.182224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.182538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.182845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.182856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.183144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.183542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.183551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.183856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.184254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.184265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.184589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.184957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.184968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.185329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.185655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.185665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.186002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.186261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.186270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.186480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.186756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.186765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.186946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.187238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.187248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e960 with addr=10.0.0.2, port=4420
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.187590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.819 [2024-10-13 17:44:04.187631] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:55.819 [2024-10-13 17:44:04.190082] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:33:55.819 [2024-10-13 17:44:04.190128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e960 (107): Transport endpoint is not connected
00:33:55.819 [2024-10-13 17:44:04.190173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 17:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:55.819 17:44:04 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:55.819 17:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:55.819 17:44:04 -- common/autotest_common.sh@10 -- # set +x
00:33:55.819 [2024-10-13 17:44:04.198309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.819 [2024-10-13 17:44:04.198387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-10-13 17:44:04.198406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-10-13 17:44:04.198414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-10-13 17:44:04.198421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
[2024-10-13 17:44:04.198437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 17:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:55.819 17:44:04 -- host/target_disconnect.sh@58 -- # wait 3421376
00:33:55.819 [2024-10-13 17:44:04.208182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.819 [2024-10-13 17:44:04.208290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-10-13 17:44:04.208306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-10-13 17:44:04.208313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-10-13 17:44:04.208320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
[2024-10-13 17:44:04.208339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:55.819 qpair failed and we were unable to recover it.
00:33:55.819 [2024-10-13 17:44:04.217982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.819 [2024-10-13 17:44:04.218047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.819 [2024-10-13 17:44:04.218067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.819 [2024-10-13 17:44:04.218075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.819 [2024-10-13 17:44:04.218082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.819 [2024-10-13 17:44:04.218096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.819 qpair failed and we were unable to recover it. 
00:33:55.819 [2024-10-13 17:44:04.228099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.819 [2024-10-13 17:44:04.228196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.819 [2024-10-13 17:44:04.228211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.819 [2024-10-13 17:44:04.228218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.819 [2024-10-13 17:44:04.228227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.819 [2024-10-13 17:44:04.228243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.819 qpair failed and we were unable to recover it. 
00:33:55.819 [2024-10-13 17:44:04.238171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.819 [2024-10-13 17:44:04.238225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.820 [2024-10-13 17:44:04.238240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.820 [2024-10-13 17:44:04.238246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.820 [2024-10-13 17:44:04.238252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.820 [2024-10-13 17:44:04.238266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.820 qpair failed and we were unable to recover it. 
00:33:55.820 [2024-10-13 17:44:04.248176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.820 [2024-10-13 17:44:04.248264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.820 [2024-10-13 17:44:04.248278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.820 [2024-10-13 17:44:04.248285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.820 [2024-10-13 17:44:04.248291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.820 [2024-10-13 17:44:04.248305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.820 qpair failed and we were unable to recover it. 
00:33:55.820 [2024-10-13 17:44:04.258162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.820 [2024-10-13 17:44:04.258217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.820 [2024-10-13 17:44:04.258234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.820 [2024-10-13 17:44:04.258240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.820 [2024-10-13 17:44:04.258247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.820 [2024-10-13 17:44:04.258260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.820 qpair failed and we were unable to recover it. 
00:33:55.820 [2024-10-13 17:44:04.268254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.820 [2024-10-13 17:44:04.268308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.820 [2024-10-13 17:44:04.268322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.820 [2024-10-13 17:44:04.268329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.820 [2024-10-13 17:44:04.268336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.820 [2024-10-13 17:44:04.268349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.820 qpair failed and we were unable to recover it. 
00:33:55.820 [2024-10-13 17:44:04.278251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.820 [2024-10-13 17:44:04.278302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.820 [2024-10-13 17:44:04.278315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.820 [2024-10-13 17:44:04.278323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.820 [2024-10-13 17:44:04.278330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.820 [2024-10-13 17:44:04.278344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.820 qpair failed and we were unable to recover it. 
00:33:55.820 [2024-10-13 17:44:04.288291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.820 [2024-10-13 17:44:04.288341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.820 [2024-10-13 17:44:04.288354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.820 [2024-10-13 17:44:04.288360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.820 [2024-10-13 17:44:04.288367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.820 [2024-10-13 17:44:04.288380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.820 qpair failed and we were unable to recover it. 
00:33:55.820 [2024-10-13 17:44:04.298181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.820 [2024-10-13 17:44:04.298238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.820 [2024-10-13 17:44:04.298252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.820 [2024-10-13 17:44:04.298259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.820 [2024-10-13 17:44:04.298265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.820 [2024-10-13 17:44:04.298282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.820 qpair failed and we were unable to recover it. 
00:33:55.820 [2024-10-13 17:44:04.308337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.820 [2024-10-13 17:44:04.308396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.820 [2024-10-13 17:44:04.308409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.820 [2024-10-13 17:44:04.308416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.820 [2024-10-13 17:44:04.308422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.820 [2024-10-13 17:44:04.308435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.820 qpair failed and we were unable to recover it. 
00:33:55.820 [2024-10-13 17:44:04.318369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.820 [2024-10-13 17:44:04.318428] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.820 [2024-10-13 17:44:04.318441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.820 [2024-10-13 17:44:04.318448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.820 [2024-10-13 17:44:04.318455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:55.820 [2024-10-13 17:44:04.318468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.820 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.328390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.328439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.328452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.328459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.328465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.328478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.338409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.338463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.338478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.338485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.338491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.338505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.348317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.348380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.348397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.348404] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.348410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.348423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.358444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.358497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.358511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.358517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.358524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.358536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.368498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.368547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.368561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.368568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.368574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.368587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.378537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.378589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.378602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.378609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.378615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.378629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.388545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.388601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.388614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.388621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.388627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.388643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.398574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.398627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.398640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.398647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.398653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.398666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.408624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.408680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.408693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.408699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.408705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.408718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.418618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.418669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.418683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.418690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.418696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.418709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.428677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.428732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.428745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.428751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.428758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.428770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.438669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.084 [2024-10-13 17:44:04.438731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.084 [2024-10-13 17:44:04.438760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.084 [2024-10-13 17:44:04.438769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.084 [2024-10-13 17:44:04.438776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.084 [2024-10-13 17:44:04.438795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.084 qpair failed and we were unable to recover it. 
00:33:56.084 [2024-10-13 17:44:04.448605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.085 [2024-10-13 17:44:04.448664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.085 [2024-10-13 17:44:04.448680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.085 [2024-10-13 17:44:04.448687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.085 [2024-10-13 17:44:04.448693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.085 [2024-10-13 17:44:04.448708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.085 qpair failed and we were unable to recover it. 
00:33:56.085 [2024-10-13 17:44:04.458829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.085 [2024-10-13 17:44:04.458891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.085 [2024-10-13 17:44:04.458905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.085 [2024-10-13 17:44:04.458912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.085 [2024-10-13 17:44:04.458918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.085 [2024-10-13 17:44:04.458931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.085 qpair failed and we were unable to recover it. 
00:33:56.085 [2024-10-13 17:44:04.468829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.085 [2024-10-13 17:44:04.468889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.085 [2024-10-13 17:44:04.468915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.085 [2024-10-13 17:44:04.468923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.085 [2024-10-13 17:44:04.468930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.085 [2024-10-13 17:44:04.468949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.085 qpair failed and we were unable to recover it. 
00:33:56.085 [2024-10-13 17:44:04.478809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.085 [2024-10-13 17:44:04.478858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.085 [2024-10-13 17:44:04.478874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.085 [2024-10-13 17:44:04.478881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.085 [2024-10-13 17:44:04.478892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.085 [2024-10-13 17:44:04.478907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.085 qpair failed and we were unable to recover it. 
00:33:56.085 [2024-10-13 17:44:04.488836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.085 [2024-10-13 17:44:04.488887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.085 [2024-10-13 17:44:04.488901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.085 [2024-10-13 17:44:04.488908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.085 [2024-10-13 17:44:04.488914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.085 [2024-10-13 17:44:04.488927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.085 qpair failed and we were unable to recover it. 
00:33:56.085 [2024-10-13 17:44:04.498848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.085 [2024-10-13 17:44:04.498900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.085 [2024-10-13 17:44:04.498913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.085 [2024-10-13 17:44:04.498920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.085 [2024-10-13 17:44:04.498926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.085 [2024-10-13 17:44:04.498940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.085 qpair failed and we were unable to recover it. 
00:33:56.085 [2024-10-13 17:44:04.508759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.085 [2024-10-13 17:44:04.508814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.085 [2024-10-13 17:44:04.508827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.085 [2024-10-13 17:44:04.508833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.085 [2024-10-13 17:44:04.508840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.085 [2024-10-13 17:44:04.508853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.085 qpair failed and we were unable to recover it. 
00:33:56.085 [2024-10-13 17:44:04.518931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.085 [2024-10-13 17:44:04.518992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.085 [2024-10-13 17:44:04.519005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.085 [2024-10-13 17:44:04.519012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.085 [2024-10-13 17:44:04.519018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.085 [2024-10-13 17:44:04.519032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.085 qpair failed and we were unable to recover it.
00:33:56.085 [2024-10-13 17:44:04.528919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.085 [2024-10-13 17:44:04.528977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.085 [2024-10-13 17:44:04.528991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.085 [2024-10-13 17:44:04.528998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.085 [2024-10-13 17:44:04.529005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.085 [2024-10-13 17:44:04.529018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.085 qpair failed and we were unable to recover it.
00:33:56.085 [2024-10-13 17:44:04.538984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.085 [2024-10-13 17:44:04.539036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.085 [2024-10-13 17:44:04.539049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.085 [2024-10-13 17:44:04.539056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.085 [2024-10-13 17:44:04.539068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.085 [2024-10-13 17:44:04.539083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.085 qpair failed and we were unable to recover it.
00:33:56.085 [2024-10-13 17:44:04.549028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.085 [2024-10-13 17:44:04.549084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.085 [2024-10-13 17:44:04.549097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.085 [2024-10-13 17:44:04.549104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.085 [2024-10-13 17:44:04.549110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.085 [2024-10-13 17:44:04.549124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.085 qpair failed and we were unable to recover it.
00:33:56.085 [2024-10-13 17:44:04.559029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.085 [2024-10-13 17:44:04.559111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.085 [2024-10-13 17:44:04.559124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.085 [2024-10-13 17:44:04.559130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.085 [2024-10-13 17:44:04.559137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.085 [2024-10-13 17:44:04.559150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.085 qpair failed and we were unable to recover it.
00:33:56.085 [2024-10-13 17:44:04.569111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.085 [2024-10-13 17:44:04.569165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.085 [2024-10-13 17:44:04.569179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.085 [2024-10-13 17:44:04.569186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.085 [2024-10-13 17:44:04.569196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.085 [2024-10-13 17:44:04.569209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.085 qpair failed and we were unable to recover it.
00:33:56.086 [2024-10-13 17:44:04.579096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.086 [2024-10-13 17:44:04.579148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.086 [2024-10-13 17:44:04.579161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.086 [2024-10-13 17:44:04.579168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.086 [2024-10-13 17:44:04.579174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.086 [2024-10-13 17:44:04.579187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.086 qpair failed and we were unable to recover it.
00:33:56.086 [2024-10-13 17:44:04.589129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.086 [2024-10-13 17:44:04.589183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.086 [2024-10-13 17:44:04.589195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.086 [2024-10-13 17:44:04.589202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.086 [2024-10-13 17:44:04.589209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.086 [2024-10-13 17:44:04.589222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.086 qpair failed and we were unable to recover it.
00:33:56.086 [2024-10-13 17:44:04.599149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.086 [2024-10-13 17:44:04.599201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.086 [2024-10-13 17:44:04.599215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.086 [2024-10-13 17:44:04.599222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.086 [2024-10-13 17:44:04.599228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.086 [2024-10-13 17:44:04.599241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.086 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.609174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.609225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.609238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.609245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.609251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.609264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.619089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.619143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.619157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.619164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.619170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.619183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.629157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.629226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.629239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.629246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.629252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.629265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.639271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.639333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.639346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.639353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.639359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.639372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.649342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.649418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.649431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.649438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.649444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.649458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.659281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.659383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.659397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.659404] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.659414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.659427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.669345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.669403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.669416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.669423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.669429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.669443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.679382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.679437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.679452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.679459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.679465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.679478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.689398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.689446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.689458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.689465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.689471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.689485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.699360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.699415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.699428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.699435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.699441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.699454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.349 [2024-10-13 17:44:04.709461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.349 [2024-10-13 17:44:04.709525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.349 [2024-10-13 17:44:04.709538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.349 [2024-10-13 17:44:04.709544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.349 [2024-10-13 17:44:04.709551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.349 [2024-10-13 17:44:04.709564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.349 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.719485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.719533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.719548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.719555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.719561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.719574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.729518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.729613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.729626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.729633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.729639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.729652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.739549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.739605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.739618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.739625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.739631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.739644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.749618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.749675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.749689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.749696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.749705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.749719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.759611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.759701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.759714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.759721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.759727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.759741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.769643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.769697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.769710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.769717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.769723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.769736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.779664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.779719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.779733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.779739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.779746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.779760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.789666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.789723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.789737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.789744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.789750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.789763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.799728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.799781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.799795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.799801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.799808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.799821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.809762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.809817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.809842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.809850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.809856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.809875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.819754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.819816] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.819841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.819849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.819856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.819875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.350 [2024-10-13 17:44:04.829767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.350 [2024-10-13 17:44:04.829829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.350 [2024-10-13 17:44:04.829854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.350 [2024-10-13 17:44:04.829862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.350 [2024-10-13 17:44:04.829869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.350 [2024-10-13 17:44:04.829888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.350 qpair failed and we were unable to recover it.
00:33:56.351 [2024-10-13 17:44:04.839716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.351 [2024-10-13 17:44:04.839770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.351 [2024-10-13 17:44:04.839785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.351 [2024-10-13 17:44:04.839796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.351 [2024-10-13 17:44:04.839803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.351 [2024-10-13 17:44:04.839817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.351 qpair failed and we were unable to recover it.
00:33:56.351 [2024-10-13 17:44:04.849746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.351 [2024-10-13 17:44:04.849801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.351 [2024-10-13 17:44:04.849815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.351 [2024-10-13 17:44:04.849822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.351 [2024-10-13 17:44:04.849828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.351 [2024-10-13 17:44:04.849842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.351 qpair failed and we were unable to recover it.
00:33:56.351 [2024-10-13 17:44:04.859953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.351 [2024-10-13 17:44:04.860007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.351 [2024-10-13 17:44:04.860020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.351 [2024-10-13 17:44:04.860027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.351 [2024-10-13 17:44:04.860033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.351 [2024-10-13 17:44:04.860046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.351 qpair failed and we were unable to recover it.
00:33:56.351 [2024-10-13 17:44:04.869925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:56.351 [2024-10-13 17:44:04.869982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:56.351 [2024-10-13 17:44:04.869996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:56.351 [2024-10-13 17:44:04.870002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:56.351 [2024-10-13 17:44:04.870009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:56.351 [2024-10-13 17:44:04.870022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:56.351 qpair failed and we were unable to recover it.
00:33:56.613 [2024-10-13 17:44:04.879968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.613 [2024-10-13 17:44:04.880018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.613 [2024-10-13 17:44:04.880031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.613 [2024-10-13 17:44:04.880038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.613 [2024-10-13 17:44:04.880044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.613 [2024-10-13 17:44:04.880058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.613 qpair failed and we were unable to recover it. 
00:33:56.613 [2024-10-13 17:44:04.889972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.613 [2024-10-13 17:44:04.890022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.613 [2024-10-13 17:44:04.890036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.613 [2024-10-13 17:44:04.890042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.613 [2024-10-13 17:44:04.890049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.613 [2024-10-13 17:44:04.890067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.613 qpair failed and we were unable to recover it. 
00:33:56.613 [2024-10-13 17:44:04.900017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.613 [2024-10-13 17:44:04.900071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.613 [2024-10-13 17:44:04.900085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.613 [2024-10-13 17:44:04.900092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.613 [2024-10-13 17:44:04.900098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.613 [2024-10-13 17:44:04.900111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.613 qpair failed and we were unable to recover it. 
00:33:56.613 [2024-10-13 17:44:04.910073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.613 [2024-10-13 17:44:04.910151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.613 [2024-10-13 17:44:04.910165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.613 [2024-10-13 17:44:04.910172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.613 [2024-10-13 17:44:04.910178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.613 [2024-10-13 17:44:04.910192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.613 qpair failed and we were unable to recover it. 
00:33:56.613 [2024-10-13 17:44:04.919978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.613 [2024-10-13 17:44:04.920029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.613 [2024-10-13 17:44:04.920045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:04.920052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:04.920058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:04.920077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:04.930107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:04.930160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:04.930173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:04.930184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:04.930190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:04.930204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:04.940042] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:04.940097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:04.940111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:04.940117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:04.940124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:04.940137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:04.950158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:04.950211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:04.950225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:04.950232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:04.950238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:04.950251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:04.960163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:04.960221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:04.960234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:04.960240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:04.960247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:04.960260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:04.970238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:04.970285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:04.970299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:04.970306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:04.970312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:04.970325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:04.980230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:04.980285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:04.980298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:04.980304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:04.980311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:04.980324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:04.990274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:04.990327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:04.990340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:04.990347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:04.990353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:04.990366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:05.000285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:05.000345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:05.000359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:05.000365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:05.000372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:05.000385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:05.010313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:05.010362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:05.010375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:05.010382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:05.010388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:05.010401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:05.020358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:05.020411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:05.020424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:05.020435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:05.020441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:05.020454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:05.030391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:05.030449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:05.030462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:05.030468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:05.030475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:05.030488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:05.040295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:05.040350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.614 [2024-10-13 17:44:05.040362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.614 [2024-10-13 17:44:05.040369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.614 [2024-10-13 17:44:05.040375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.614 [2024-10-13 17:44:05.040388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.614 qpair failed and we were unable to recover it. 
00:33:56.614 [2024-10-13 17:44:05.050454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.614 [2024-10-13 17:44:05.050505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.615 [2024-10-13 17:44:05.050518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.615 [2024-10-13 17:44:05.050525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.615 [2024-10-13 17:44:05.050531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.615 [2024-10-13 17:44:05.050544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.615 qpair failed and we were unable to recover it. 
00:33:56.615 [2024-10-13 17:44:05.060452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.615 [2024-10-13 17:44:05.060504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.615 [2024-10-13 17:44:05.060516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.615 [2024-10-13 17:44:05.060523] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.615 [2024-10-13 17:44:05.060529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.615 [2024-10-13 17:44:05.060542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.615 qpair failed and we were unable to recover it. 
00:33:56.615 [2024-10-13 17:44:05.070490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.615 [2024-10-13 17:44:05.070556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.615 [2024-10-13 17:44:05.070570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.615 [2024-10-13 17:44:05.070577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.615 [2024-10-13 17:44:05.070583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.615 [2024-10-13 17:44:05.070596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.615 qpair failed and we were unable to recover it. 
00:33:56.615 [2024-10-13 17:44:05.080532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.615 [2024-10-13 17:44:05.080587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.615 [2024-10-13 17:44:05.080599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.615 [2024-10-13 17:44:05.080606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.615 [2024-10-13 17:44:05.080612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.615 [2024-10-13 17:44:05.080625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.615 qpair failed and we were unable to recover it. 
00:33:56.615 [2024-10-13 17:44:05.090566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.615 [2024-10-13 17:44:05.090624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.615 [2024-10-13 17:44:05.090637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.615 [2024-10-13 17:44:05.090644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.615 [2024-10-13 17:44:05.090651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.615 [2024-10-13 17:44:05.090663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.615 qpair failed and we were unable to recover it. 
00:33:56.615 [2024-10-13 17:44:05.100566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.615 [2024-10-13 17:44:05.100621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.615 [2024-10-13 17:44:05.100634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.615 [2024-10-13 17:44:05.100641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.615 [2024-10-13 17:44:05.100647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.615 [2024-10-13 17:44:05.100661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.615 qpair failed and we were unable to recover it. 
00:33:56.615 [2024-10-13 17:44:05.110620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.615 [2024-10-13 17:44:05.110674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.615 [2024-10-13 17:44:05.110687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.615 [2024-10-13 17:44:05.110697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.615 [2024-10-13 17:44:05.110703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.615 [2024-10-13 17:44:05.110716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.615 qpair failed and we were unable to recover it. 
00:33:56.615 [2024-10-13 17:44:05.120647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.615 [2024-10-13 17:44:05.120707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.615 [2024-10-13 17:44:05.120720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.615 [2024-10-13 17:44:05.120727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.615 [2024-10-13 17:44:05.120733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.615 [2024-10-13 17:44:05.120746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.615 qpair failed and we were unable to recover it. 
00:33:56.615 [2024-10-13 17:44:05.130678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.615 [2024-10-13 17:44:05.130728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.615 [2024-10-13 17:44:05.130741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.615 [2024-10-13 17:44:05.130748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.615 [2024-10-13 17:44:05.130754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.615 [2024-10-13 17:44:05.130767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.615 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.140576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.140627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.140640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.878 [2024-10-13 17:44:05.140646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.878 [2024-10-13 17:44:05.140653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.878 [2024-10-13 17:44:05.140666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.878 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.150739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.150789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.150803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.878 [2024-10-13 17:44:05.150810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.878 [2024-10-13 17:44:05.150816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.878 [2024-10-13 17:44:05.150829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.878 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.160645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.160693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.160708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.878 [2024-10-13 17:44:05.160715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.878 [2024-10-13 17:44:05.160721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.878 [2024-10-13 17:44:05.160735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.878 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.170776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.170835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.170850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.878 [2024-10-13 17:44:05.170857] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.878 [2024-10-13 17:44:05.170863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.878 [2024-10-13 17:44:05.170877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.878 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.180700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.180768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.180781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.878 [2024-10-13 17:44:05.180788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.878 [2024-10-13 17:44:05.180795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.878 [2024-10-13 17:44:05.180808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.878 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.190853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.190915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.190940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.878 [2024-10-13 17:44:05.190948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.878 [2024-10-13 17:44:05.190955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.878 [2024-10-13 17:44:05.190974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.878 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.200878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.200934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.200965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.878 [2024-10-13 17:44:05.200973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.878 [2024-10-13 17:44:05.200980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.878 [2024-10-13 17:44:05.200999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.878 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.210902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.210952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.210967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.878 [2024-10-13 17:44:05.210974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.878 [2024-10-13 17:44:05.210981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.878 [2024-10-13 17:44:05.210996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.878 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.220809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.220862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.220877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.878 [2024-10-13 17:44:05.220884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.878 [2024-10-13 17:44:05.220890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.878 [2024-10-13 17:44:05.220904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.878 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.230932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.230996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.231009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.878 [2024-10-13 17:44:05.231016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.878 [2024-10-13 17:44:05.231022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.878 [2024-10-13 17:44:05.231036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.878 qpair failed and we were unable to recover it. 
00:33:56.878 [2024-10-13 17:44:05.241029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.878 [2024-10-13 17:44:05.241081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.878 [2024-10-13 17:44:05.241095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.241102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.241108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.241122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.250932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.250987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.251001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.251007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.251013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.251027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.261040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.261098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.261111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.261118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.261124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.261138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.271065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.271133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.271147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.271156] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.271163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.271176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.281065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.281118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.281131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.281138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.281145] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.281159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.291117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.291174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.291190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.291197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.291204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.291217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.301193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.301250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.301262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.301269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.301275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.301289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.311219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.311291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.311304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.311310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.311317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.311330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.321198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.321251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.321265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.321271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.321278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.321291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.331238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.331318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.331331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.331338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.331344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.331361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.341280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.341332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.341346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.341353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.341359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.341373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.351304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.351357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.351370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.351377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.351383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.351396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.361320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.361368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.879 [2024-10-13 17:44:05.361381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.879 [2024-10-13 17:44:05.361388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.879 [2024-10-13 17:44:05.361394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.879 [2024-10-13 17:44:05.361407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.879 qpair failed and we were unable to recover it. 
00:33:56.879 [2024-10-13 17:44:05.371347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.879 [2024-10-13 17:44:05.371404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.880 [2024-10-13 17:44:05.371418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.880 [2024-10-13 17:44:05.371424] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.880 [2024-10-13 17:44:05.371430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.880 [2024-10-13 17:44:05.371443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.880 qpair failed and we were unable to recover it. 
00:33:56.880 [2024-10-13 17:44:05.381399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.880 [2024-10-13 17:44:05.381457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.880 [2024-10-13 17:44:05.381473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.880 [2024-10-13 17:44:05.381480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.880 [2024-10-13 17:44:05.381486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.880 [2024-10-13 17:44:05.381499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.880 qpair failed and we were unable to recover it. 
00:33:56.880 [2024-10-13 17:44:05.391302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.880 [2024-10-13 17:44:05.391361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.880 [2024-10-13 17:44:05.391374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.880 [2024-10-13 17:44:05.391381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.880 [2024-10-13 17:44:05.391387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:56.880 [2024-10-13 17:44:05.391400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.880 qpair failed and we were unable to recover it. 
00:33:57.143 [2024-10-13 17:44:05.401434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.143 [2024-10-13 17:44:05.401523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.143 [2024-10-13 17:44:05.401537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.143 [2024-10-13 17:44:05.401544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.143 [2024-10-13 17:44:05.401550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.143 [2024-10-13 17:44:05.401564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.143 qpair failed and we were unable to recover it. 
00:33:57.143 [2024-10-13 17:44:05.411462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.143 [2024-10-13 17:44:05.411514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.143 [2024-10-13 17:44:05.411527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.143 [2024-10-13 17:44:05.411534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.143 [2024-10-13 17:44:05.411540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.143 [2024-10-13 17:44:05.411553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.143 qpair failed and we were unable to recover it. 
00:33:57.143 [2024-10-13 17:44:05.421476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.143 [2024-10-13 17:44:05.421529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.143 [2024-10-13 17:44:05.421544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.143 [2024-10-13 17:44:05.421551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.143 [2024-10-13 17:44:05.421560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.143 [2024-10-13 17:44:05.421577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.143 qpair failed and we were unable to recover it. 
00:33:57.143 [2024-10-13 17:44:05.431549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.143 [2024-10-13 17:44:05.431608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.143 [2024-10-13 17:44:05.431621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.143 [2024-10-13 17:44:05.431628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.143 [2024-10-13 17:44:05.431634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.143 [2024-10-13 17:44:05.431648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.143 qpair failed and we were unable to recover it. 
00:33:57.143 [2024-10-13 17:44:05.441517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.143 [2024-10-13 17:44:05.441565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.143 [2024-10-13 17:44:05.441578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.143 [2024-10-13 17:44:05.441584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.143 [2024-10-13 17:44:05.441591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.143 [2024-10-13 17:44:05.441604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.143 qpair failed and we were unable to recover it. 
00:33:57.143 [2024-10-13 17:44:05.451479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.143 [2024-10-13 17:44:05.451532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.143 [2024-10-13 17:44:05.451546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.143 [2024-10-13 17:44:05.451555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.143 [2024-10-13 17:44:05.451563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.143 [2024-10-13 17:44:05.451579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.143 qpair failed and we were unable to recover it. 
00:33:57.143 [2024-10-13 17:44:05.461459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.143 [2024-10-13 17:44:05.461524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.143 [2024-10-13 17:44:05.461539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.143 [2024-10-13 17:44:05.461546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.143 [2024-10-13 17:44:05.461552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.143 [2024-10-13 17:44:05.461566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.143 qpair failed and we were unable to recover it. 
00:33:57.143 [2024-10-13 17:44:05.471614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.143 [2024-10-13 17:44:05.471669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.143 [2024-10-13 17:44:05.471693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.143 [2024-10-13 17:44:05.471701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.143 [2024-10-13 17:44:05.471707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.143 [2024-10-13 17:44:05.471721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.143 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.481636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.481691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.481704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.481711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.481717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.481730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.491720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.491798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.491811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.491818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.491826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.491839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.501710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.501764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.501777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.501784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.501790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.501803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.511748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.511803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.511817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.511823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.511830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.511847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.521783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.521835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.521849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.521856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.521863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.521876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.531672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.531762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.531777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.531784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.531790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.531803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.541832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.541890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.541904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.541911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.541917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.541930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.551731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.551789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.551803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.551809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.551816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.551829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.561880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.561933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.561949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.561956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.561962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.561976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.571899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.571953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.571967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.571973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.571979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.571992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.581909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.581974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.581987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.581994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.582000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.582013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.591993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.592067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.592080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.144 [2024-10-13 17:44:05.592087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.144 [2024-10-13 17:44:05.592094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.144 [2024-10-13 17:44:05.592107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.144 qpair failed and we were unable to recover it. 
00:33:57.144 [2024-10-13 17:44:05.602038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.144 [2024-10-13 17:44:05.602099] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.144 [2024-10-13 17:44:05.602113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.145 [2024-10-13 17:44:05.602119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.145 [2024-10-13 17:44:05.602125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.145 [2024-10-13 17:44:05.602142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.145 qpair failed and we were unable to recover it. 
00:33:57.145 [2024-10-13 17:44:05.612036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.145 [2024-10-13 17:44:05.612089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.145 [2024-10-13 17:44:05.612103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.145 [2024-10-13 17:44:05.612109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.145 [2024-10-13 17:44:05.612116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.145 [2024-10-13 17:44:05.612129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.145 qpair failed and we were unable to recover it. 
00:33:57.145 [2024-10-13 17:44:05.622033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.145 [2024-10-13 17:44:05.622083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.145 [2024-10-13 17:44:05.622096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.145 [2024-10-13 17:44:05.622103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.145 [2024-10-13 17:44:05.622109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.145 [2024-10-13 17:44:05.622122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.145 qpair failed and we were unable to recover it. 
00:33:57.145 [2024-10-13 17:44:05.632090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.145 [2024-10-13 17:44:05.632177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.145 [2024-10-13 17:44:05.632190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.145 [2024-10-13 17:44:05.632197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.145 [2024-10-13 17:44:05.632203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.145 [2024-10-13 17:44:05.632217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.145 qpair failed and we were unable to recover it. 
00:33:57.145 [2024-10-13 17:44:05.642106] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.145 [2024-10-13 17:44:05.642151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.145 [2024-10-13 17:44:05.642165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.145 [2024-10-13 17:44:05.642172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.145 [2024-10-13 17:44:05.642178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.145 [2024-10-13 17:44:05.642191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.145 qpair failed and we were unable to recover it. 
00:33:57.145 [2024-10-13 17:44:05.652003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.145 [2024-10-13 17:44:05.652054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.145 [2024-10-13 17:44:05.652075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.145 [2024-10-13 17:44:05.652082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.145 [2024-10-13 17:44:05.652088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.145 [2024-10-13 17:44:05.652102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.145 qpair failed and we were unable to recover it. 
00:33:57.145 [2024-10-13 17:44:05.662178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.145 [2024-10-13 17:44:05.662229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.145 [2024-10-13 17:44:05.662242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.145 [2024-10-13 17:44:05.662249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.145 [2024-10-13 17:44:05.662255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.145 [2024-10-13 17:44:05.662269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.145 qpair failed and we were unable to recover it. 
00:33:57.408 [2024-10-13 17:44:05.672203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.408 [2024-10-13 17:44:05.672253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.408 [2024-10-13 17:44:05.672266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.408 [2024-10-13 17:44:05.672273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.408 [2024-10-13 17:44:05.672279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.408 [2024-10-13 17:44:05.672292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.408 qpair failed and we were unable to recover it. 
00:33:57.408 [2024-10-13 17:44:05.682217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.408 [2024-10-13 17:44:05.682312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.408 [2024-10-13 17:44:05.682325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.408 [2024-10-13 17:44:05.682332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.408 [2024-10-13 17:44:05.682338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.408 [2024-10-13 17:44:05.682352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.408 qpair failed and we were unable to recover it. 
00:33:57.408 [2024-10-13 17:44:05.692263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.408 [2024-10-13 17:44:05.692314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.408 [2024-10-13 17:44:05.692326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.408 [2024-10-13 17:44:05.692333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.408 [2024-10-13 17:44:05.692343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.408 [2024-10-13 17:44:05.692356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.408 qpair failed and we were unable to recover it. 
00:33:57.408 [2024-10-13 17:44:05.702280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.408 [2024-10-13 17:44:05.702333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.408 [2024-10-13 17:44:05.702346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.702353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.702359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.702373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.712322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.712377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.712391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.712398] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.712404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.712417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.722345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.722400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.722414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.722420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.722427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.722440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.732350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.732434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.732449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.732456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.732463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.732480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.742414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.742478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.742492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.742498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.742505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.742518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.752433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.752496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.752510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.752517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.752524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.752537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.762459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.762510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.762522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.762529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.762535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.762548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.772469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.772568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.772582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.772588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.772595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.772608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.782516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.782598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.782611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.782618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.782627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.782641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.792407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.792467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.792480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.792486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.792492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.792505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.802569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.802658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.802670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.802677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.802683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.802696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.812567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.812626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.812641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.812648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.812654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.812667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.822631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.822710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.409 [2024-10-13 17:44:05.822723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.409 [2024-10-13 17:44:05.822730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.409 [2024-10-13 17:44:05.822736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.409 [2024-10-13 17:44:05.822749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.409 qpair failed and we were unable to recover it. 
00:33:57.409 [2024-10-13 17:44:05.832659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.409 [2024-10-13 17:44:05.832731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.410 [2024-10-13 17:44:05.832745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.410 [2024-10-13 17:44:05.832751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.410 [2024-10-13 17:44:05.832758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.410 [2024-10-13 17:44:05.832771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.410 qpair failed and we were unable to recover it. 
00:33:57.410 [2024-10-13 17:44:05.842671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.410 [2024-10-13 17:44:05.842733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.410 [2024-10-13 17:44:05.842746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.410 [2024-10-13 17:44:05.842753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.410 [2024-10-13 17:44:05.842759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.410 [2024-10-13 17:44:05.842772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.410 qpair failed and we were unable to recover it. 
00:33:57.410 [2024-10-13 17:44:05.852614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.410 [2024-10-13 17:44:05.852690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.410 [2024-10-13 17:44:05.852703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.410 [2024-10-13 17:44:05.852710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.410 [2024-10-13 17:44:05.852716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.410 [2024-10-13 17:44:05.852729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.410 qpair failed and we were unable to recover it. 
00:33:57.410 [2024-10-13 17:44:05.862612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.410 [2024-10-13 17:44:05.862665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.410 [2024-10-13 17:44:05.862677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.410 [2024-10-13 17:44:05.862684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.410 [2024-10-13 17:44:05.862690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.410 [2024-10-13 17:44:05.862703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.410 qpair failed and we were unable to recover it. 
00:33:57.410 [2024-10-13 17:44:05.872774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.410 [2024-10-13 17:44:05.872827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.410 [2024-10-13 17:44:05.872840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.410 [2024-10-13 17:44:05.872847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.410 [2024-10-13 17:44:05.872856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.410 [2024-10-13 17:44:05.872869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.410 qpair failed and we were unable to recover it. 
00:33:57.410 [2024-10-13 17:44:05.882788] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.410 [2024-10-13 17:44:05.882844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.410 [2024-10-13 17:44:05.882868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.410 [2024-10-13 17:44:05.882876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.410 [2024-10-13 17:44:05.882883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.410 [2024-10-13 17:44:05.882902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.410 qpair failed and we were unable to recover it. 
00:33:57.410 [2024-10-13 17:44:05.892816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.410 [2024-10-13 17:44:05.892874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.410 [2024-10-13 17:44:05.892899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.410 [2024-10-13 17:44:05.892907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.410 [2024-10-13 17:44:05.892914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.410 [2024-10-13 17:44:05.892933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.410 qpair failed and we were unable to recover it. 
00:33:57.410 [2024-10-13 17:44:05.902846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.410 [2024-10-13 17:44:05.902946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.410 [2024-10-13 17:44:05.902962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.410 [2024-10-13 17:44:05.902969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.410 [2024-10-13 17:44:05.902975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.410 [2024-10-13 17:44:05.902990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.410 qpair failed and we were unable to recover it. 
00:33:57.410 [2024-10-13 17:44:05.912868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.410 [2024-10-13 17:44:05.912927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.410 [2024-10-13 17:44:05.912941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.410 [2024-10-13 17:44:05.912948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.410 [2024-10-13 17:44:05.912954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.410 [2024-10-13 17:44:05.912968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.410 qpair failed and we were unable to recover it. 
00:33:57.410 [2024-10-13 17:44:05.922892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.410 [2024-10-13 17:44:05.922943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.410 [2024-10-13 17:44:05.922957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.410 [2024-10-13 17:44:05.922964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.410 [2024-10-13 17:44:05.922970] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.410 [2024-10-13 17:44:05.922983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.410 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:05.932904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:05.932954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:05.932967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:05.932974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:05.932980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:05.932994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:05.942987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:05.943035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:05.943049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:05.943055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:05.943075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:05.943089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:05.952981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:05.953038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:05.953052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:05.953058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:05.953070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:05.953083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:05.963029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:05.963085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:05.963098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:05.963104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:05.963115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:05.963129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:05.973057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:05.973125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:05.973139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:05.973145] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:05.973152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:05.973165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:05.983109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:05.983190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:05.983203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:05.983210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:05.983216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:05.983229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:05.993134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:05.993186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:05.993199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:05.993205] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:05.993212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:05.993225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:06.003150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:06.003195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:06.003208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:06.003215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:06.003221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:06.003235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:06.013205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:06.013267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:06.013280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:06.013287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:06.013293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:06.013306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:06.023219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:06.023272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:06.023285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:06.023292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:06.023298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:06.023311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.674 [2024-10-13 17:44:06.033228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.674 [2024-10-13 17:44:06.033283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.674 [2024-10-13 17:44:06.033297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.674 [2024-10-13 17:44:06.033304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.674 [2024-10-13 17:44:06.033310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.674 [2024-10-13 17:44:06.033323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.674 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.043135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.043192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.043205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.043212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.043218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.043231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.053284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.053338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.053351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.053361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.053368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.053382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.063291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.063344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.063357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.063364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.063370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.063384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.073362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.073423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.073437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.073444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.073450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.073463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.083340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.083393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.083406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.083413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.083420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.083433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.093382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.093433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.093446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.093453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.093459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.093472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.103346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.103411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.103424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.103431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.103437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.103451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.113452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.113508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.113521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.113528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.113534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.113547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.123486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.123537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.123552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.123559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.123565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.123578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.133429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.133482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.133495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.133502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.133508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.133521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.143542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.143597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.143610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.143620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.143627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.143640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.153574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.153635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.153649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.153656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.675 [2024-10-13 17:44:06.153662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.675 [2024-10-13 17:44:06.153676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.675 qpair failed and we were unable to recover it. 
00:33:57.675 [2024-10-13 17:44:06.163482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.675 [2024-10-13 17:44:06.163532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.675 [2024-10-13 17:44:06.163546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.675 [2024-10-13 17:44:06.163553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.676 [2024-10-13 17:44:06.163560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.676 [2024-10-13 17:44:06.163574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.676 qpair failed and we were unable to recover it. 
00:33:57.676 [2024-10-13 17:44:06.173623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.676 [2024-10-13 17:44:06.173677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.676 [2024-10-13 17:44:06.173691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.676 [2024-10-13 17:44:06.173698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.676 [2024-10-13 17:44:06.173704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.676 [2024-10-13 17:44:06.173717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.676 qpair failed and we were unable to recover it. 
00:33:57.676 [2024-10-13 17:44:06.183661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.676 [2024-10-13 17:44:06.183712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.676 [2024-10-13 17:44:06.183725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.676 [2024-10-13 17:44:06.183732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.676 [2024-10-13 17:44:06.183739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.676 [2024-10-13 17:44:06.183752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.676 qpair failed and we were unable to recover it. 
00:33:57.676 [2024-10-13 17:44:06.193581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.676 [2024-10-13 17:44:06.193681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.676 [2024-10-13 17:44:06.193695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.676 [2024-10-13 17:44:06.193702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.676 [2024-10-13 17:44:06.193708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.676 [2024-10-13 17:44:06.193721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.676 qpair failed and we were unable to recover it. 
00:33:57.938 [2024-10-13 17:44:06.203706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.938 [2024-10-13 17:44:06.203759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.938 [2024-10-13 17:44:06.203772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.938 [2024-10-13 17:44:06.203779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.938 [2024-10-13 17:44:06.203785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.938 [2024-10-13 17:44:06.203799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.938 qpair failed and we were unable to recover it. 
00:33:57.938 [2024-10-13 17:44:06.213760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.938 [2024-10-13 17:44:06.213813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.938 [2024-10-13 17:44:06.213825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.938 [2024-10-13 17:44:06.213832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.938 [2024-10-13 17:44:06.213839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.938 [2024-10-13 17:44:06.213852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.938 qpair failed and we were unable to recover it. 
00:33:57.938 [2024-10-13 17:44:06.223655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.938 [2024-10-13 17:44:06.223707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.938 [2024-10-13 17:44:06.223721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.938 [2024-10-13 17:44:06.223727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.938 [2024-10-13 17:44:06.223734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.938 [2024-10-13 17:44:06.223747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.938 qpair failed and we were unable to recover it. 
00:33:57.938 [2024-10-13 17:44:06.233806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.938 [2024-10-13 17:44:06.233862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.938 [2024-10-13 17:44:06.233875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.938 [2024-10-13 17:44:06.233885] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.938 [2024-10-13 17:44:06.233891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.938 [2024-10-13 17:44:06.233904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.938 qpair failed and we were unable to recover it. 
00:33:57.938 [2024-10-13 17:44:06.243803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.243849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.243862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.243869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.243875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.243888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.253853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.253950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.253963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.253969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.253976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.253989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.263905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.263988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.264000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.264007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.264013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.264027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.273905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.273987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.274001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.274007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.274013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.274026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.283915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.283971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.283984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.283991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.283998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.284011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.293968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.294047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.294060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.294072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.294079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.294093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.304003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.304053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.304071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.304078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.304084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.304098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.313958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.314029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.314042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.314049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.314055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.314074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.324050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.324105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.324119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.324129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.324135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.324149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.334078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.334135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.334148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.334155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.334161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.334174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.344134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.344189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.344204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.344211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.344217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.344232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.354125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.354209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.354222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.354229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.354235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.939 [2024-10-13 17:44:06.354249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.939 qpair failed and we were unable to recover it. 
00:33:57.939 [2024-10-13 17:44:06.364056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.939 [2024-10-13 17:44:06.364114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.939 [2024-10-13 17:44:06.364127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.939 [2024-10-13 17:44:06.364134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.939 [2024-10-13 17:44:06.364140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.940 [2024-10-13 17:44:06.364154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.940 qpair failed and we were unable to recover it. 
00:33:57.940 [2024-10-13 17:44:06.374212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.940 [2024-10-13 17:44:06.374294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.940 [2024-10-13 17:44:06.374308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.940 [2024-10-13 17:44:06.374315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.940 [2024-10-13 17:44:06.374321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.940 [2024-10-13 17:44:06.374335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.940 qpair failed and we were unable to recover it. 
00:33:57.940 [2024-10-13 17:44:06.384214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.940 [2024-10-13 17:44:06.384281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.940 [2024-10-13 17:44:06.384295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.940 [2024-10-13 17:44:06.384301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.940 [2024-10-13 17:44:06.384308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.940 [2024-10-13 17:44:06.384322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.940 qpair failed and we were unable to recover it. 
00:33:57.940 [2024-10-13 17:44:06.394257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.940 [2024-10-13 17:44:06.394320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.940 [2024-10-13 17:44:06.394333] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.940 [2024-10-13 17:44:06.394339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.940 [2024-10-13 17:44:06.394346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.940 [2024-10-13 17:44:06.394359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.940 qpair failed and we were unable to recover it. 
00:33:57.940 [2024-10-13 17:44:06.404252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.940 [2024-10-13 17:44:06.404302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.940 [2024-10-13 17:44:06.404316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.940 [2024-10-13 17:44:06.404322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.940 [2024-10-13 17:44:06.404328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.940 [2024-10-13 17:44:06.404342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.940 qpair failed and we were unable to recover it. 
00:33:57.940 [2024-10-13 17:44:06.414303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.940 [2024-10-13 17:44:06.414353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.940 [2024-10-13 17:44:06.414369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.940 [2024-10-13 17:44:06.414376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.940 [2024-10-13 17:44:06.414383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.940 [2024-10-13 17:44:06.414396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.940 qpair failed and we were unable to recover it. 
00:33:57.940 [2024-10-13 17:44:06.424209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.940 [2024-10-13 17:44:06.424260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.940 [2024-10-13 17:44:06.424274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.940 [2024-10-13 17:44:06.424280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.940 [2024-10-13 17:44:06.424286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.940 [2024-10-13 17:44:06.424299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.940 qpair failed and we were unable to recover it. 
00:33:57.940 [2024-10-13 17:44:06.434381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.940 [2024-10-13 17:44:06.434437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.940 [2024-10-13 17:44:06.434450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.940 [2024-10-13 17:44:06.434456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.940 [2024-10-13 17:44:06.434463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:57.940 [2024-10-13 17:44:06.434475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.940 qpair failed and we were unable to recover it. 
00:33:57.940 [2024-10-13 17:44:06.444405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:57.940 [2024-10-13 17:44:06.444500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:57.940 [2024-10-13 17:44:06.444513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:57.940 [2024-10-13 17:44:06.444520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:57.940 [2024-10-13 17:44:06.444526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:57.940 [2024-10-13 17:44:06.444540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:57.940 qpair failed and we were unable to recover it.
00:33:57.940 [2024-10-13 17:44:06.454401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:57.940 [2024-10-13 17:44:06.454452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:57.940 [2024-10-13 17:44:06.454465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:57.940 [2024-10-13 17:44:06.454471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:57.940 [2024-10-13 17:44:06.454477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:57.940 [2024-10-13 17:44:06.454491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:57.940 qpair failed and we were unable to recover it.
00:33:58.203 [2024-10-13 17:44:06.464545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.203 [2024-10-13 17:44:06.464606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.203 [2024-10-13 17:44:06.464619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.203 [2024-10-13 17:44:06.464626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.203 [2024-10-13 17:44:06.464632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.203 [2024-10-13 17:44:06.464645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.203 qpair failed and we were unable to recover it.
00:33:58.203 [2024-10-13 17:44:06.474414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.203 [2024-10-13 17:44:06.474471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.203 [2024-10-13 17:44:06.474485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.203 [2024-10-13 17:44:06.474491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.203 [2024-10-13 17:44:06.474497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.203 [2024-10-13 17:44:06.474510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.203 qpair failed and we were unable to recover it.
00:33:58.203 [2024-10-13 17:44:06.484530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.203 [2024-10-13 17:44:06.484586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.203 [2024-10-13 17:44:06.484599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.203 [2024-10-13 17:44:06.484605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.203 [2024-10-13 17:44:06.484612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.203 [2024-10-13 17:44:06.484625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.203 qpair failed and we were unable to recover it.
00:33:58.203 [2024-10-13 17:44:06.494562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.203 [2024-10-13 17:44:06.494621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.203 [2024-10-13 17:44:06.494634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.203 [2024-10-13 17:44:06.494641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.203 [2024-10-13 17:44:06.494647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.203 [2024-10-13 17:44:06.494660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.203 qpair failed and we were unable to recover it.
00:33:58.203 [2024-10-13 17:44:06.504550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.203 [2024-10-13 17:44:06.504604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.203 [2024-10-13 17:44:06.504620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.203 [2024-10-13 17:44:06.504627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.203 [2024-10-13 17:44:06.504634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.203 [2024-10-13 17:44:06.504646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.203 qpair failed and we were unable to recover it.
00:33:58.203 [2024-10-13 17:44:06.514556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.203 [2024-10-13 17:44:06.514641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.203 [2024-10-13 17:44:06.514654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.203 [2024-10-13 17:44:06.514660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.203 [2024-10-13 17:44:06.514666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.203 [2024-10-13 17:44:06.514679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.203 qpair failed and we were unable to recover it.
00:33:58.203 [2024-10-13 17:44:06.524610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.203 [2024-10-13 17:44:06.524656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.203 [2024-10-13 17:44:06.524669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.203 [2024-10-13 17:44:06.524676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.203 [2024-10-13 17:44:06.524682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.203 [2024-10-13 17:44:06.524695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.203 qpair failed and we were unable to recover it.
00:33:58.203 [2024-10-13 17:44:06.534635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.203 [2024-10-13 17:44:06.534687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.203 [2024-10-13 17:44:06.534701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.203 [2024-10-13 17:44:06.534707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.203 [2024-10-13 17:44:06.534713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.203 [2024-10-13 17:44:06.534726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.203 qpair failed and we were unable to recover it.
00:33:58.203 [2024-10-13 17:44:06.544553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.203 [2024-10-13 17:44:06.544607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.203 [2024-10-13 17:44:06.544621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.203 [2024-10-13 17:44:06.544627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.544634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.544650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.554723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.554781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.554793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.554800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.554806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.554819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.564730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.564783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.564808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.564816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.564822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.564841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.574740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.574830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.574855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.574864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.574870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.574890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.584709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.584773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.584798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.584806] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.584812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.584831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.594821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.594873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.594893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.594900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.594907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.594921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.604870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.604951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.604966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.604972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.604979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.604992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.614728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.614783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.614797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.614803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.614810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.614823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.624896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.624950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.624964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.624970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.624977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.624991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.634927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.634995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.635008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.635015] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.635021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.635038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.644818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.644879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.644895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.644901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.644908] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.644922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.654968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.655017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.655031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.655038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.655044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.655057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.665019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.665077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.665090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.204 [2024-10-13 17:44:06.665097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.204 [2024-10-13 17:44:06.665103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.204 [2024-10-13 17:44:06.665117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.204 qpair failed and we were unable to recover it.
00:33:58.204 [2024-10-13 17:44:06.675032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.204 [2024-10-13 17:44:06.675096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.204 [2024-10-13 17:44:06.675110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.205 [2024-10-13 17:44:06.675117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.205 [2024-10-13 17:44:06.675123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.205 [2024-10-13 17:44:06.675137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.205 qpair failed and we were unable to recover it.
00:33:58.205 [2024-10-13 17:44:06.685056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.205 [2024-10-13 17:44:06.685103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.205 [2024-10-13 17:44:06.685120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.205 [2024-10-13 17:44:06.685126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.205 [2024-10-13 17:44:06.685133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.205 [2024-10-13 17:44:06.685146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.205 qpair failed and we were unable to recover it.
00:33:58.205 [2024-10-13 17:44:06.695050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.205 [2024-10-13 17:44:06.695101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.205 [2024-10-13 17:44:06.695114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.205 [2024-10-13 17:44:06.695121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.205 [2024-10-13 17:44:06.695127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.205 [2024-10-13 17:44:06.695141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.205 qpair failed and we were unable to recover it.
00:33:58.205 [2024-10-13 17:44:06.705120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.205 [2024-10-13 17:44:06.705194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.205 [2024-10-13 17:44:06.705208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.205 [2024-10-13 17:44:06.705215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.205 [2024-10-13 17:44:06.705221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.205 [2024-10-13 17:44:06.705239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.205 qpair failed and we were unable to recover it.
00:33:58.205 [2024-10-13 17:44:06.715101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.205 [2024-10-13 17:44:06.715155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.205 [2024-10-13 17:44:06.715169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.205 [2024-10-13 17:44:06.715176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.205 [2024-10-13 17:44:06.715182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.205 [2024-10-13 17:44:06.715195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.205 qpair failed and we were unable to recover it.
00:33:58.205 [2024-10-13 17:44:06.725038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.205 [2024-10-13 17:44:06.725113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.205 [2024-10-13 17:44:06.725127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.205 [2024-10-13 17:44:06.725134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.205 [2024-10-13 17:44:06.725140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.205 [2024-10-13 17:44:06.725157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.205 qpair failed and we were unable to recover it.
00:33:58.467 [2024-10-13 17:44:06.735145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.467 [2024-10-13 17:44:06.735192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.467 [2024-10-13 17:44:06.735206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.467 [2024-10-13 17:44:06.735213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.467 [2024-10-13 17:44:06.735219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.467 [2024-10-13 17:44:06.735233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.467 qpair failed and we were unable to recover it.
00:33:58.467 [2024-10-13 17:44:06.745211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.467 [2024-10-13 17:44:06.745261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.467 [2024-10-13 17:44:06.745274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.467 [2024-10-13 17:44:06.745281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.467 [2024-10-13 17:44:06.745287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.467 [2024-10-13 17:44:06.745300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.467 qpair failed and we were unable to recover it.
00:33:58.467 [2024-10-13 17:44:06.755227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.467 [2024-10-13 17:44:06.755283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.467 [2024-10-13 17:44:06.755296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.467 [2024-10-13 17:44:06.755303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.467 [2024-10-13 17:44:06.755309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.467 [2024-10-13 17:44:06.755322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.468 qpair failed and we were unable to recover it.
00:33:58.468 [2024-10-13 17:44:06.765260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.468 [2024-10-13 17:44:06.765304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.468 [2024-10-13 17:44:06.765318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.468 [2024-10-13 17:44:06.765325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.468 [2024-10-13 17:44:06.765331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.468 [2024-10-13 17:44:06.765344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.468 qpair failed and we were unable to recover it.
00:33:58.468 [2024-10-13 17:44:06.775260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.468 [2024-10-13 17:44:06.775301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.468 [2024-10-13 17:44:06.775318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.468 [2024-10-13 17:44:06.775325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.468 [2024-10-13 17:44:06.775331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.468 [2024-10-13 17:44:06.775345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.468 qpair failed and we were unable to recover it.
00:33:58.468 [2024-10-13 17:44:06.785359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.468 [2024-10-13 17:44:06.785451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.468 [2024-10-13 17:44:06.785465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.468 [2024-10-13 17:44:06.785472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.468 [2024-10-13 17:44:06.785479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.468 [2024-10-13 17:44:06.785492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.468 qpair failed and we were unable to recover it.
00:33:58.468 [2024-10-13 17:44:06.795310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.468 [2024-10-13 17:44:06.795362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.468 [2024-10-13 17:44:06.795374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.468 [2024-10-13 17:44:06.795381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.468 [2024-10-13 17:44:06.795387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.468 [2024-10-13 17:44:06.795400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.468 qpair failed and we were unable to recover it.
00:33:58.468 [2024-10-13 17:44:06.805357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.468 [2024-10-13 17:44:06.805408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.468 [2024-10-13 17:44:06.805421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.468 [2024-10-13 17:44:06.805428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.468 [2024-10-13 17:44:06.805434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.468 [2024-10-13 17:44:06.805447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.468 qpair failed and we were unable to recover it. 
00:33:58.468 [2024-10-13 17:44:06.815367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.468 [2024-10-13 17:44:06.815413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.468 [2024-10-13 17:44:06.815426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.468 [2024-10-13 17:44:06.815433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.468 [2024-10-13 17:44:06.815439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.468 [2024-10-13 17:44:06.815455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.468 qpair failed and we were unable to recover it. 
00:33:58.468 [2024-10-13 17:44:06.825453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.468 [2024-10-13 17:44:06.825505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.468 [2024-10-13 17:44:06.825519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.468 [2024-10-13 17:44:06.825526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.468 [2024-10-13 17:44:06.825532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.468 [2024-10-13 17:44:06.825545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.468 qpair failed and we were unable to recover it. 
00:33:58.468 [2024-10-13 17:44:06.835424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.468 [2024-10-13 17:44:06.835481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.468 [2024-10-13 17:44:06.835494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.468 [2024-10-13 17:44:06.835500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.468 [2024-10-13 17:44:06.835507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.468 [2024-10-13 17:44:06.835520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.468 qpair failed and we were unable to recover it. 
00:33:58.468 [2024-10-13 17:44:06.845447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.468 [2024-10-13 17:44:06.845504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.468 [2024-10-13 17:44:06.845517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.468 [2024-10-13 17:44:06.845524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.468 [2024-10-13 17:44:06.845530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.468 [2024-10-13 17:44:06.845544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.468 qpair failed and we were unable to recover it. 
00:33:58.468 [2024-10-13 17:44:06.855460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.468 [2024-10-13 17:44:06.855506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.468 [2024-10-13 17:44:06.855519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.468 [2024-10-13 17:44:06.855526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.468 [2024-10-13 17:44:06.855532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.468 [2024-10-13 17:44:06.855545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.468 qpair failed and we were unable to recover it. 
00:33:58.468 [2024-10-13 17:44:06.865560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.468 [2024-10-13 17:44:06.865612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.468 [2024-10-13 17:44:06.865628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.468 [2024-10-13 17:44:06.865635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.468 [2024-10-13 17:44:06.865641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.468 [2024-10-13 17:44:06.865655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.468 qpair failed and we were unable to recover it. 
00:33:58.468 [2024-10-13 17:44:06.875408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.468 [2024-10-13 17:44:06.875461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.468 [2024-10-13 17:44:06.875474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.468 [2024-10-13 17:44:06.875481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.468 [2024-10-13 17:44:06.875487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.468 [2024-10-13 17:44:06.875501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.468 qpair failed and we were unable to recover it. 
00:33:58.468 [2024-10-13 17:44:06.885605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.468 [2024-10-13 17:44:06.885684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.468 [2024-10-13 17:44:06.885697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.468 [2024-10-13 17:44:06.885704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.885710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.885723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.469 [2024-10-13 17:44:06.895595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.469 [2024-10-13 17:44:06.895639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.469 [2024-10-13 17:44:06.895651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.469 [2024-10-13 17:44:06.895658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.895664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.895677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.469 [2024-10-13 17:44:06.905595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.469 [2024-10-13 17:44:06.905647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.469 [2024-10-13 17:44:06.905661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.469 [2024-10-13 17:44:06.905668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.905677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.905691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.469 [2024-10-13 17:44:06.915572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.469 [2024-10-13 17:44:06.915627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.469 [2024-10-13 17:44:06.915640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.469 [2024-10-13 17:44:06.915647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.915653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.915666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.469 [2024-10-13 17:44:06.925640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.469 [2024-10-13 17:44:06.925694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.469 [2024-10-13 17:44:06.925707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.469 [2024-10-13 17:44:06.925714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.925720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.925734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.469 [2024-10-13 17:44:06.935725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.469 [2024-10-13 17:44:06.935772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.469 [2024-10-13 17:44:06.935785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.469 [2024-10-13 17:44:06.935792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.935798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.935811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.469 [2024-10-13 17:44:06.945655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.469 [2024-10-13 17:44:06.945707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.469 [2024-10-13 17:44:06.945719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.469 [2024-10-13 17:44:06.945726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.945732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.945746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.469 [2024-10-13 17:44:06.955768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.469 [2024-10-13 17:44:06.955828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.469 [2024-10-13 17:44:06.955842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.469 [2024-10-13 17:44:06.955849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.955855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.955872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.469 [2024-10-13 17:44:06.965801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.469 [2024-10-13 17:44:06.965854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.469 [2024-10-13 17:44:06.965868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.469 [2024-10-13 17:44:06.965875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.965881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.965894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.469 [2024-10-13 17:44:06.975850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.469 [2024-10-13 17:44:06.975896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.469 [2024-10-13 17:44:06.975910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.469 [2024-10-13 17:44:06.975916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.975923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.975936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.469 [2024-10-13 17:44:06.985924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.469 [2024-10-13 17:44:06.986028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.469 [2024-10-13 17:44:06.986041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.469 [2024-10-13 17:44:06.986048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.469 [2024-10-13 17:44:06.986055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.469 [2024-10-13 17:44:06.986073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.469 qpair failed and we were unable to recover it. 
00:33:58.732 [2024-10-13 17:44:06.995840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.732 [2024-10-13 17:44:06.995935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.732 [2024-10-13 17:44:06.995948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.732 [2024-10-13 17:44:06.995954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.732 [2024-10-13 17:44:06.995968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.732 [2024-10-13 17:44:06.995981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.732 qpair failed and we were unable to recover it. 
00:33:58.732 [2024-10-13 17:44:07.005910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.732 [2024-10-13 17:44:07.005955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.732 [2024-10-13 17:44:07.005969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.732 [2024-10-13 17:44:07.005976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.732 [2024-10-13 17:44:07.005982] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.732 [2024-10-13 17:44:07.005995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.732 qpair failed and we were unable to recover it. 
00:33:58.732 [2024-10-13 17:44:07.015953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.732 [2024-10-13 17:44:07.016002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.732 [2024-10-13 17:44:07.016014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.732 [2024-10-13 17:44:07.016021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.732 [2024-10-13 17:44:07.016027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.732 [2024-10-13 17:44:07.016041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.732 qpair failed and we were unable to recover it. 
00:33:58.732 [2024-10-13 17:44:07.026020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.732 [2024-10-13 17:44:07.026078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.732 [2024-10-13 17:44:07.026092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.732 [2024-10-13 17:44:07.026099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.732 [2024-10-13 17:44:07.026105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.732 [2024-10-13 17:44:07.026118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.732 qpair failed and we were unable to recover it. 
00:33:58.732 [2024-10-13 17:44:07.035997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.732 [2024-10-13 17:44:07.036085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.732 [2024-10-13 17:44:07.036098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.732 [2024-10-13 17:44:07.036105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.732 [2024-10-13 17:44:07.036112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.732 [2024-10-13 17:44:07.036125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.732 qpair failed and we were unable to recover it. 
00:33:58.732 [2024-10-13 17:44:07.045984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.732 [2024-10-13 17:44:07.046035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.732 [2024-10-13 17:44:07.046049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.732 [2024-10-13 17:44:07.046055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.732 [2024-10-13 17:44:07.046066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.732 [2024-10-13 17:44:07.046080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.732 qpair failed and we were unable to recover it. 
00:33:58.732 [2024-10-13 17:44:07.056056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.732 [2024-10-13 17:44:07.056107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.732 [2024-10-13 17:44:07.056120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.732 [2024-10-13 17:44:07.056126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.732 [2024-10-13 17:44:07.056133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.732 [2024-10-13 17:44:07.056146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.732 qpair failed and we were unable to recover it. 
00:33:58.732 [2024-10-13 17:44:07.066158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.732 [2024-10-13 17:44:07.066231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.732 [2024-10-13 17:44:07.066244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.732 [2024-10-13 17:44:07.066251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.732 [2024-10-13 17:44:07.066257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.732 [2024-10-13 17:44:07.066270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.732 qpair failed and we were unable to recover it. 
00:33:58.732 [2024-10-13 17:44:07.076124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.732 [2024-10-13 17:44:07.076220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.732 [2024-10-13 17:44:07.076234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.732 [2024-10-13 17:44:07.076241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.732 [2024-10-13 17:44:07.076248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.732 [2024-10-13 17:44:07.076263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.732 qpair failed and we were unable to recover it. 
00:33:58.732 [2024-10-13 17:44:07.086155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.732 [2024-10-13 17:44:07.086230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.732 [2024-10-13 17:44:07.086243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.732 [2024-10-13 17:44:07.086250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.732 [2024-10-13 17:44:07.086260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.732 [2024-10-13 17:44:07.086273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.732 qpair failed and we were unable to recover it.
00:33:58.732 [2024-10-13 17:44:07.096178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.732 [2024-10-13 17:44:07.096224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.096237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.096244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.096250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.096264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.106159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.106210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.106223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.106230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.106236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.106249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.116238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.116286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.116300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.116306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.116312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.116326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.126253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.126301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.126315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.126321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.126328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.126341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.136317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.136365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.136378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.136385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.136391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.136404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.146335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.146396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.146410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.146417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.146423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.146436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.156342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.156395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.156408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.156415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.156421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.156435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.166361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.166433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.166446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.166452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.166458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.166471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.176384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.176462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.176475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.176482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.176491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.176504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.186347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.186399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.186414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.186420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.186426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.186441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.196466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.196521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.196534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.196541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.196547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.196560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.206339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.206381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.206394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.206401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.206407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.206421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.216487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.216536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.733 [2024-10-13 17:44:07.216549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.733 [2024-10-13 17:44:07.216555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.733 [2024-10-13 17:44:07.216562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.733 [2024-10-13 17:44:07.216575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.733 qpair failed and we were unable to recover it.
00:33:58.733 [2024-10-13 17:44:07.226500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.733 [2024-10-13 17:44:07.226583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.734 [2024-10-13 17:44:07.226596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.734 [2024-10-13 17:44:07.226603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.734 [2024-10-13 17:44:07.226610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.734 [2024-10-13 17:44:07.226623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.734 qpair failed and we were unable to recover it.
00:33:58.734 [2024-10-13 17:44:07.236569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.734 [2024-10-13 17:44:07.236622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.734 [2024-10-13 17:44:07.236635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.734 [2024-10-13 17:44:07.236642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.734 [2024-10-13 17:44:07.236648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.734 [2024-10-13 17:44:07.236662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.734 qpair failed and we were unable to recover it.
00:33:58.734 [2024-10-13 17:44:07.246596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.734 [2024-10-13 17:44:07.246644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.734 [2024-10-13 17:44:07.246657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.734 [2024-10-13 17:44:07.246664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.734 [2024-10-13 17:44:07.246670] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.734 [2024-10-13 17:44:07.246683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.734 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.256627] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.256671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.256684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.997 [2024-10-13 17:44:07.256692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.997 [2024-10-13 17:44:07.256698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.997 [2024-10-13 17:44:07.256711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.997 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.266697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.266751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.266764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.997 [2024-10-13 17:44:07.266774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.997 [2024-10-13 17:44:07.266781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.997 [2024-10-13 17:44:07.266794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.997 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.276691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.276741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.276755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.997 [2024-10-13 17:44:07.276763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.997 [2024-10-13 17:44:07.276769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.997 [2024-10-13 17:44:07.276783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.997 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.286707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.286752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.286766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.997 [2024-10-13 17:44:07.286773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.997 [2024-10-13 17:44:07.286780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.997 [2024-10-13 17:44:07.286794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.997 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.296729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.296780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.296806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.997 [2024-10-13 17:44:07.296815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.997 [2024-10-13 17:44:07.296822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.997 [2024-10-13 17:44:07.296840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.997 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.306816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.306877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.306902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.997 [2024-10-13 17:44:07.306910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.997 [2024-10-13 17:44:07.306917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.997 [2024-10-13 17:44:07.306935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.997 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.316803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.316859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.316874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.997 [2024-10-13 17:44:07.316881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.997 [2024-10-13 17:44:07.316887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.997 [2024-10-13 17:44:07.316902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.997 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.326858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.326934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.326948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.997 [2024-10-13 17:44:07.326955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.997 [2024-10-13 17:44:07.326962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.997 [2024-10-13 17:44:07.326976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.997 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.336849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.336896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.336911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.997 [2024-10-13 17:44:07.336918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.997 [2024-10-13 17:44:07.336924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.997 [2024-10-13 17:44:07.336939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.997 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.346891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.346941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.346955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.997 [2024-10-13 17:44:07.346962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.997 [2024-10-13 17:44:07.346968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.997 [2024-10-13 17:44:07.346982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.997 qpair failed and we were unable to recover it.
00:33:58.997 [2024-10-13 17:44:07.356876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.997 [2024-10-13 17:44:07.356922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.997 [2024-10-13 17:44:07.356936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.998 [2024-10-13 17:44:07.356947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.998 [2024-10-13 17:44:07.356955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.998 [2024-10-13 17:44:07.356968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.998 qpair failed and we were unable to recover it.
00:33:58.998 [2024-10-13 17:44:07.366938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.998 [2024-10-13 17:44:07.366985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.998 [2024-10-13 17:44:07.366999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.998 [2024-10-13 17:44:07.367005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.998 [2024-10-13 17:44:07.367012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.998 [2024-10-13 17:44:07.367025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.998 qpair failed and we were unable to recover it.
00:33:58.998 [2024-10-13 17:44:07.376960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.998 [2024-10-13 17:44:07.377039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.998 [2024-10-13 17:44:07.377053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.998 [2024-10-13 17:44:07.377060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.998 [2024-10-13 17:44:07.377075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.998 [2024-10-13 17:44:07.377089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.998 qpair failed and we were unable to recover it.
00:33:58.998 [2024-10-13 17:44:07.387011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.998 [2024-10-13 17:44:07.387068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.998 [2024-10-13 17:44:07.387081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.998 [2024-10-13 17:44:07.387088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.998 [2024-10-13 17:44:07.387095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.998 [2024-10-13 17:44:07.387108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.998 qpair failed and we were unable to recover it.
00:33:58.998 [2024-10-13 17:44:07.397003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.998 [2024-10-13 17:44:07.397056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.998 [2024-10-13 17:44:07.397075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.998 [2024-10-13 17:44:07.397083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.998 [2024-10-13 17:44:07.397089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.998 [2024-10-13 17:44:07.397103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.998 qpair failed and we were unable to recover it.
00:33:58.998 [2024-10-13 17:44:07.406902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.998 [2024-10-13 17:44:07.406951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.998 [2024-10-13 17:44:07.406966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.998 [2024-10-13 17:44:07.406973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.998 [2024-10-13 17:44:07.406979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.998 [2024-10-13 17:44:07.406993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.998 qpair failed and we were unable to recover it.
00:33:58.998 [2024-10-13 17:44:07.417043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.998 [2024-10-13 17:44:07.417106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.998 [2024-10-13 17:44:07.417120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.998 [2024-10-13 17:44:07.417127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.998 [2024-10-13 17:44:07.417133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.998 [2024-10-13 17:44:07.417146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.998 qpair failed and we were unable to recover it.
00:33:58.998 [2024-10-13 17:44:07.427127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.998 [2024-10-13 17:44:07.427179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.998 [2024-10-13 17:44:07.427193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.998 [2024-10-13 17:44:07.427200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.998 [2024-10-13 17:44:07.427206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.998 [2024-10-13 17:44:07.427219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.998 qpair failed and we were unable to recover it.
00:33:58.998 [2024-10-13 17:44:07.437113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:58.998 [2024-10-13 17:44:07.437163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:58.998 [2024-10-13 17:44:07.437176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:58.998 [2024-10-13 17:44:07.437183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:58.998 [2024-10-13 17:44:07.437189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:58.998 [2024-10-13 17:44:07.437202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:58.998 qpair failed and we were unable to recover it.
00:33:58.998 [2024-10-13 17:44:07.447131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.998 [2024-10-13 17:44:07.447178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.998 [2024-10-13 17:44:07.447191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.998 [2024-10-13 17:44:07.447201] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.998 [2024-10-13 17:44:07.447208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.998 [2024-10-13 17:44:07.447221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.998 qpair failed and we were unable to recover it. 
00:33:58.998 [2024-10-13 17:44:07.457160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.998 [2024-10-13 17:44:07.457211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.998 [2024-10-13 17:44:07.457224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.998 [2024-10-13 17:44:07.457231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.998 [2024-10-13 17:44:07.457237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.998 [2024-10-13 17:44:07.457250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.998 qpair failed and we were unable to recover it. 
00:33:58.998 [2024-10-13 17:44:07.467139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.998 [2024-10-13 17:44:07.467222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.998 [2024-10-13 17:44:07.467235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.998 [2024-10-13 17:44:07.467242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.998 [2024-10-13 17:44:07.467248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.998 [2024-10-13 17:44:07.467261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.998 qpair failed and we were unable to recover it. 
00:33:58.998 [2024-10-13 17:44:07.477208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.998 [2024-10-13 17:44:07.477256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.999 [2024-10-13 17:44:07.477269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.999 [2024-10-13 17:44:07.477276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.999 [2024-10-13 17:44:07.477282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.999 [2024-10-13 17:44:07.477296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.999 qpair failed and we were unable to recover it. 
00:33:58.999 [2024-10-13 17:44:07.487265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.999 [2024-10-13 17:44:07.487337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.999 [2024-10-13 17:44:07.487350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.999 [2024-10-13 17:44:07.487357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.999 [2024-10-13 17:44:07.487363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.999 [2024-10-13 17:44:07.487376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.999 qpair failed and we were unable to recover it. 
00:33:58.999 [2024-10-13 17:44:07.497280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.999 [2024-10-13 17:44:07.497326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.999 [2024-10-13 17:44:07.497339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.999 [2024-10-13 17:44:07.497346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.999 [2024-10-13 17:44:07.497352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.999 [2024-10-13 17:44:07.497365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.999 qpair failed and we were unable to recover it. 
00:33:58.999 [2024-10-13 17:44:07.507335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.999 [2024-10-13 17:44:07.507392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.999 [2024-10-13 17:44:07.507405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.999 [2024-10-13 17:44:07.507412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.999 [2024-10-13 17:44:07.507418] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.999 [2024-10-13 17:44:07.507431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.999 qpair failed and we were unable to recover it. 
00:33:58.999 [2024-10-13 17:44:07.517349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.999 [2024-10-13 17:44:07.517400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.999 [2024-10-13 17:44:07.517413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.999 [2024-10-13 17:44:07.517420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.999 [2024-10-13 17:44:07.517426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:58.999 [2024-10-13 17:44:07.517440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.999 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.527366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.527415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.527429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.527436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.527442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.527455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.537307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.537352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.537365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.537376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.537382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.537395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.547463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.547519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.547533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.547539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.547545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.547559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.557317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.557376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.557389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.557396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.557402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.557415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.567448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.567488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.567501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.567508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.567514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.567527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.577459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.577520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.577533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.577540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.577546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.577559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.587570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.587627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.587640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.587647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.587653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.587666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.597558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.597660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.597672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.597679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.597685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.597699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.607457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.607505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.607518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.607525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.607531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.607544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.617607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.617702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.617714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.617721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.617727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.617740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.627698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.263 [2024-10-13 17:44:07.627753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.263 [2024-10-13 17:44:07.627770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.263 [2024-10-13 17:44:07.627777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.263 [2024-10-13 17:44:07.627783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.263 [2024-10-13 17:44:07.627797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.263 qpair failed and we were unable to recover it. 
00:33:59.263 [2024-10-13 17:44:07.637674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.264 [2024-10-13 17:44:07.637725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.264 [2024-10-13 17:44:07.637739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.264 [2024-10-13 17:44:07.637746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.264 [2024-10-13 17:44:07.637753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.264 [2024-10-13 17:44:07.637766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.264 qpair failed and we were unable to recover it. 
00:33:59.264 [2024-10-13 17:44:07.647689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.264 [2024-10-13 17:44:07.647773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.264 [2024-10-13 17:44:07.647786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.264 [2024-10-13 17:44:07.647793] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.264 [2024-10-13 17:44:07.647799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.264 [2024-10-13 17:44:07.647812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.264 qpair failed and we were unable to recover it. 
00:33:59.264 [2024-10-13 17:44:07.657597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.264 [2024-10-13 17:44:07.657661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.264 [2024-10-13 17:44:07.657677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.264 [2024-10-13 17:44:07.657684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.264 [2024-10-13 17:44:07.657691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.264 [2024-10-13 17:44:07.657705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.264 qpair failed and we were unable to recover it. 
00:33:59.264 [2024-10-13 17:44:07.667771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.264 [2024-10-13 17:44:07.667844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.264 [2024-10-13 17:44:07.667857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.264 [2024-10-13 17:44:07.667864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.264 [2024-10-13 17:44:07.667870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.264 [2024-10-13 17:44:07.667883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.264 qpair failed and we were unable to recover it. 
00:33:59.264 [2024-10-13 17:44:07.677779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.264 [2024-10-13 17:44:07.677830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.264 [2024-10-13 17:44:07.677844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.264 [2024-10-13 17:44:07.677851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.264 [2024-10-13 17:44:07.677857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.264 [2024-10-13 17:44:07.677870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.264 qpair failed and we were unable to recover it. 
00:33:59.264 [2024-10-13 17:44:07.687815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.264 [2024-10-13 17:44:07.687859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.264 [2024-10-13 17:44:07.687872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.264 [2024-10-13 17:44:07.687879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.264 [2024-10-13 17:44:07.687885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.264 [2024-10-13 17:44:07.687898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.264 qpair failed and we were unable to recover it. 
00:33:59.264 [2024-10-13 17:44:07.697701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.264 [2024-10-13 17:44:07.697769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.264 [2024-10-13 17:44:07.697782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.264 [2024-10-13 17:44:07.697788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.264 [2024-10-13 17:44:07.697795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.264 [2024-10-13 17:44:07.697808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.264 qpair failed and we were unable to recover it. 
00:33:59.264 [2024-10-13 17:44:07.707906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.264 [2024-10-13 17:44:07.707968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.264 [2024-10-13 17:44:07.707980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.264 [2024-10-13 17:44:07.707987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.264 [2024-10-13 17:44:07.707993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.264 [2024-10-13 17:44:07.708006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.264 qpair failed and we were unable to recover it. 
00:33:59.264 [2024-10-13 17:44:07.717862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.264 [2024-10-13 17:44:07.717913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.264 [2024-10-13 17:44:07.717929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.264 [2024-10-13 17:44:07.717936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.264 [2024-10-13 17:44:07.717942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.264 [2024-10-13 17:44:07.717955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.264 qpair failed and we were unable to recover it. 
00:33:59.264 [2024-10-13 17:44:07.727835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.264 [2024-10-13 17:44:07.727886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.264 [2024-10-13 17:44:07.727901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.264 [2024-10-13 17:44:07.727907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.264 [2024-10-13 17:44:07.727914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.264 [2024-10-13 17:44:07.727927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.264 qpair failed and we were unable to recover it.
00:33:59.264 [2024-10-13 17:44:07.737969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.264 [2024-10-13 17:44:07.738028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.264 [2024-10-13 17:44:07.738041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.264 [2024-10-13 17:44:07.738048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.264 [2024-10-13 17:44:07.738054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.264 [2024-10-13 17:44:07.738071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.264 qpair failed and we were unable to recover it.
00:33:59.264 [2024-10-13 17:44:07.747977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.264 [2024-10-13 17:44:07.748028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.264 [2024-10-13 17:44:07.748041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.264 [2024-10-13 17:44:07.748048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.264 [2024-10-13 17:44:07.748055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.264 [2024-10-13 17:44:07.748072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.264 qpair failed and we were unable to recover it.
00:33:59.264 [2024-10-13 17:44:07.757977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.264 [2024-10-13 17:44:07.758029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.264 [2024-10-13 17:44:07.758042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.264 [2024-10-13 17:44:07.758049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.264 [2024-10-13 17:44:07.758055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.264 [2024-10-13 17:44:07.758079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.265 qpair failed and we were unable to recover it.
00:33:59.265 [2024-10-13 17:44:07.767999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.265 [2024-10-13 17:44:07.768048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.265 [2024-10-13 17:44:07.768060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.265 [2024-10-13 17:44:07.768072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.265 [2024-10-13 17:44:07.768078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.265 [2024-10-13 17:44:07.768091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.265 qpair failed and we were unable to recover it.
00:33:59.265 [2024-10-13 17:44:07.778052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.265 [2024-10-13 17:44:07.778101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.265 [2024-10-13 17:44:07.778114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.265 [2024-10-13 17:44:07.778121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.265 [2024-10-13 17:44:07.778127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.265 [2024-10-13 17:44:07.778141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.265 qpair failed and we were unable to recover it.
00:33:59.533 [2024-10-13 17:44:07.788102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.533 [2024-10-13 17:44:07.788155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.533 [2024-10-13 17:44:07.788168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.533 [2024-10-13 17:44:07.788175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.533 [2024-10-13 17:44:07.788181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.533 [2024-10-13 17:44:07.788194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.533 qpair failed and we were unable to recover it.
00:33:59.533 [2024-10-13 17:44:07.798089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.534 [2024-10-13 17:44:07.798185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.534 [2024-10-13 17:44:07.798198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.534 [2024-10-13 17:44:07.798205] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.534 [2024-10-13 17:44:07.798212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.534 [2024-10-13 17:44:07.798225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.534 qpair failed and we were unable to recover it.
00:33:59.534 [2024-10-13 17:44:07.808114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.534 [2024-10-13 17:44:07.808162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.534 [2024-10-13 17:44:07.808179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.534 [2024-10-13 17:44:07.808186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.534 [2024-10-13 17:44:07.808193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.534 [2024-10-13 17:44:07.808206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.534 qpair failed and we were unable to recover it.
00:33:59.534 [2024-10-13 17:44:07.818024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.534 [2024-10-13 17:44:07.818072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.534 [2024-10-13 17:44:07.818087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.534 [2024-10-13 17:44:07.818094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.534 [2024-10-13 17:44:07.818100] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.534 [2024-10-13 17:44:07.818115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.534 qpair failed and we were unable to recover it.
00:33:59.534 [2024-10-13 17:44:07.828219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.534 [2024-10-13 17:44:07.828271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.534 [2024-10-13 17:44:07.828286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.534 [2024-10-13 17:44:07.828293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.534 [2024-10-13 17:44:07.828299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.534 [2024-10-13 17:44:07.828312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.534 qpair failed and we were unable to recover it.
00:33:59.534 [2024-10-13 17:44:07.838217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.534 [2024-10-13 17:44:07.838264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.534 [2024-10-13 17:44:07.838277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.534 [2024-10-13 17:44:07.838284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.534 [2024-10-13 17:44:07.838291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.534 [2024-10-13 17:44:07.838304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.534 qpair failed and we were unable to recover it.
00:33:59.534 [2024-10-13 17:44:07.848243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.534 [2024-10-13 17:44:07.848313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.534 [2024-10-13 17:44:07.848327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.534 [2024-10-13 17:44:07.848334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.534 [2024-10-13 17:44:07.848340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.534 [2024-10-13 17:44:07.848357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.534 qpair failed and we were unable to recover it.
00:33:59.534 [2024-10-13 17:44:07.858131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.534 [2024-10-13 17:44:07.858176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.534 [2024-10-13 17:44:07.858190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.534 [2024-10-13 17:44:07.858196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.534 [2024-10-13 17:44:07.858202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.534 [2024-10-13 17:44:07.858216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.534 qpair failed and we were unable to recover it.
00:33:59.534 [2024-10-13 17:44:07.868317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.534 [2024-10-13 17:44:07.868370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.534 [2024-10-13 17:44:07.868383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.534 [2024-10-13 17:44:07.868390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.534 [2024-10-13 17:44:07.868396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.534 [2024-10-13 17:44:07.868409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.534 qpair failed and we were unable to recover it.
00:33:59.534 [2024-10-13 17:44:07.878283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.534 [2024-10-13 17:44:07.878337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.534 [2024-10-13 17:44:07.878350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.535 [2024-10-13 17:44:07.878357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.535 [2024-10-13 17:44:07.878364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.535 [2024-10-13 17:44:07.878377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.535 qpair failed and we were unable to recover it.
00:33:59.535 [2024-10-13 17:44:07.888352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.535 [2024-10-13 17:44:07.888397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.535 [2024-10-13 17:44:07.888410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.535 [2024-10-13 17:44:07.888417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.535 [2024-10-13 17:44:07.888423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.535 [2024-10-13 17:44:07.888436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.535 qpair failed and we were unable to recover it.
00:33:59.535 [2024-10-13 17:44:07.898339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.535 [2024-10-13 17:44:07.898381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.535 [2024-10-13 17:44:07.898398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.535 [2024-10-13 17:44:07.898404] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.535 [2024-10-13 17:44:07.898410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.535 [2024-10-13 17:44:07.898423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.535 qpair failed and we were unable to recover it.
00:33:59.535 [2024-10-13 17:44:07.908441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.535 [2024-10-13 17:44:07.908495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.535 [2024-10-13 17:44:07.908509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.535 [2024-10-13 17:44:07.908515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.535 [2024-10-13 17:44:07.908521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.535 [2024-10-13 17:44:07.908534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.535 qpair failed and we were unable to recover it.
00:33:59.535 [2024-10-13 17:44:07.918389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.535 [2024-10-13 17:44:07.918441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.535 [2024-10-13 17:44:07.918453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.535 [2024-10-13 17:44:07.918460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.535 [2024-10-13 17:44:07.918466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.535 [2024-10-13 17:44:07.918479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.535 qpair failed and we were unable to recover it.
00:33:59.535 [2024-10-13 17:44:07.928441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.535 [2024-10-13 17:44:07.928515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.535 [2024-10-13 17:44:07.928528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.535 [2024-10-13 17:44:07.928535] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.535 [2024-10-13 17:44:07.928541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.535 [2024-10-13 17:44:07.928554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.535 qpair failed and we were unable to recover it.
00:33:59.535 [2024-10-13 17:44:07.938477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.535 [2024-10-13 17:44:07.938524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.535 [2024-10-13 17:44:07.938537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.535 [2024-10-13 17:44:07.938543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.535 [2024-10-13 17:44:07.938550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.535 [2024-10-13 17:44:07.938567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.535 qpair failed and we were unable to recover it.
00:33:59.535 [2024-10-13 17:44:07.948542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.535 [2024-10-13 17:44:07.948594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.535 [2024-10-13 17:44:07.948608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.535 [2024-10-13 17:44:07.948615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.535 [2024-10-13 17:44:07.948621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.535 [2024-10-13 17:44:07.948634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.535 qpair failed and we were unable to recover it.
00:33:59.535 [2024-10-13 17:44:07.958535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.535 [2024-10-13 17:44:07.958586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.535 [2024-10-13 17:44:07.958600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.535 [2024-10-13 17:44:07.958607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.535 [2024-10-13 17:44:07.958613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.535 [2024-10-13 17:44:07.958627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.535 qpair failed and we were unable to recover it.
00:33:59.536 [2024-10-13 17:44:07.968596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.536 [2024-10-13 17:44:07.968646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.536 [2024-10-13 17:44:07.968662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.536 [2024-10-13 17:44:07.968669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.536 [2024-10-13 17:44:07.968675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.536 [2024-10-13 17:44:07.968689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.536 qpair failed and we were unable to recover it.
00:33:59.536 [2024-10-13 17:44:07.978443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.536 [2024-10-13 17:44:07.978498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.536 [2024-10-13 17:44:07.978514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.536 [2024-10-13 17:44:07.978520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.536 [2024-10-13 17:44:07.978527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.536 [2024-10-13 17:44:07.978541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.536 qpair failed and we were unable to recover it.
00:33:59.536 [2024-10-13 17:44:07.988658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.536 [2024-10-13 17:44:07.988712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.536 [2024-10-13 17:44:07.988729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.536 [2024-10-13 17:44:07.988736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.536 [2024-10-13 17:44:07.988742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.536 [2024-10-13 17:44:07.988755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.536 qpair failed and we were unable to recover it.
00:33:59.536 [2024-10-13 17:44:07.998647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.536 [2024-10-13 17:44:07.998693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.536 [2024-10-13 17:44:07.998707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.536 [2024-10-13 17:44:07.998713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.536 [2024-10-13 17:44:07.998719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.536 [2024-10-13 17:44:07.998733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.536 qpair failed and we were unable to recover it.
00:33:59.536 [2024-10-13 17:44:08.008667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.536 [2024-10-13 17:44:08.008745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.536 [2024-10-13 17:44:08.008758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.536 [2024-10-13 17:44:08.008765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.536 [2024-10-13 17:44:08.008771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.536 [2024-10-13 17:44:08.008784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.536 qpair failed and we were unable to recover it.
00:33:59.536 [2024-10-13 17:44:08.018696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.536 [2024-10-13 17:44:08.018749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.536 [2024-10-13 17:44:08.018762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.536 [2024-10-13 17:44:08.018769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.536 [2024-10-13 17:44:08.018775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.536 [2024-10-13 17:44:08.018788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.536 qpair failed and we were unable to recover it.
00:33:59.536 [2024-10-13 17:44:08.028633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.536 [2024-10-13 17:44:08.028685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.536 [2024-10-13 17:44:08.028700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.536 [2024-10-13 17:44:08.028706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.536 [2024-10-13 17:44:08.028713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.536 [2024-10-13 17:44:08.028732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.536 qpair failed and we were unable to recover it.
00:33:59.536 [2024-10-13 17:44:08.038746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.536 [2024-10-13 17:44:08.038795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.536 [2024-10-13 17:44:08.038809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.536 [2024-10-13 17:44:08.038817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.536 [2024-10-13 17:44:08.038824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.536 [2024-10-13 17:44:08.038838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.536 qpair failed and we were unable to recover it.
00:33:59.536 [2024-10-13 17:44:08.048777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.536 [2024-10-13 17:44:08.048834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.536 [2024-10-13 17:44:08.048859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.536 [2024-10-13 17:44:08.048867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.536 [2024-10-13 17:44:08.048873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.536 [2024-10-13 17:44:08.048892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.536 qpair failed and we were unable to recover it.
00:33:59.800 [2024-10-13 17:44:08.058822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.800 [2024-10-13 17:44:08.058869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.800 [2024-10-13 17:44:08.058884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.800 [2024-10-13 17:44:08.058891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.800 [2024-10-13 17:44:08.058898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.800 [2024-10-13 17:44:08.058912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.800 qpair failed and we were unable to recover it.
00:33:59.800 [2024-10-13 17:44:08.068786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.800 [2024-10-13 17:44:08.068838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.800 [2024-10-13 17:44:08.068852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.800 [2024-10-13 17:44:08.068859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.800 [2024-10-13 17:44:08.068865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.800 [2024-10-13 17:44:08.068879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.800 qpair failed and we were unable to recover it.
00:33:59.800 [2024-10-13 17:44:08.078863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.800 [2024-10-13 17:44:08.078916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.800 [2024-10-13 17:44:08.078934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.800 [2024-10-13 17:44:08.078941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.800 [2024-10-13 17:44:08.078947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:33:59.800 [2024-10-13 17:44:08.078961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:59.800 qpair failed and we were unable to recover it.
00:33:59.800 [2024-10-13 17:44:08.088870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.800 [2024-10-13 17:44:08.088914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.800 [2024-10-13 17:44:08.088927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.800 [2024-10-13 17:44:08.088934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.800 [2024-10-13 17:44:08.088940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.800 [2024-10-13 17:44:08.088953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.800 qpair failed and we were unable to recover it. 
00:33:59.800 [2024-10-13 17:44:08.098908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.098955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.098967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.098974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.801 [2024-10-13 17:44:08.098980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.801 [2024-10-13 17:44:08.098994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.801 qpair failed and we were unable to recover it. 
00:33:59.801 [2024-10-13 17:44:08.108850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.108902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.108915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.108922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.801 [2024-10-13 17:44:08.108928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.801 [2024-10-13 17:44:08.108941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.801 qpair failed and we were unable to recover it. 
00:33:59.801 [2024-10-13 17:44:08.118959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.119010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.119024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.119031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.801 [2024-10-13 17:44:08.119041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.801 [2024-10-13 17:44:08.119055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.801 qpair failed and we were unable to recover it. 
00:33:59.801 [2024-10-13 17:44:08.128971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.129021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.129035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.129041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.801 [2024-10-13 17:44:08.129048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.801 [2024-10-13 17:44:08.129061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.801 qpair failed and we were unable to recover it. 
00:33:59.801 [2024-10-13 17:44:08.139000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.139046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.139059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.139071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.801 [2024-10-13 17:44:08.139077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.801 [2024-10-13 17:44:08.139090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.801 qpair failed and we were unable to recover it. 
00:33:59.801 [2024-10-13 17:44:08.149004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.149056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.149074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.149081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.801 [2024-10-13 17:44:08.149087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.801 [2024-10-13 17:44:08.149101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.801 qpair failed and we were unable to recover it. 
00:33:59.801 [2024-10-13 17:44:08.159051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.159102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.159116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.159123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.801 [2024-10-13 17:44:08.159129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.801 [2024-10-13 17:44:08.159143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.801 qpair failed and we were unable to recover it. 
00:33:59.801 [2024-10-13 17:44:08.169059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.169115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.169128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.169135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.801 [2024-10-13 17:44:08.169141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.801 [2024-10-13 17:44:08.169154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.801 qpair failed and we were unable to recover it. 
00:33:59.801 [2024-10-13 17:44:08.179135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.179186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.179199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.179206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.801 [2024-10-13 17:44:08.179212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.801 [2024-10-13 17:44:08.179226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.801 qpair failed and we were unable to recover it. 
00:33:59.801 [2024-10-13 17:44:08.189172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.189220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.189232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.189239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.801 [2024-10-13 17:44:08.189245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.801 [2024-10-13 17:44:08.189258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.801 qpair failed and we were unable to recover it. 
00:33:59.801 [2024-10-13 17:44:08.199187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.801 [2024-10-13 17:44:08.199237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.801 [2024-10-13 17:44:08.199249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.801 [2024-10-13 17:44:08.199256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.199262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.199276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.209178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.802 [2024-10-13 17:44:08.209223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.802 [2024-10-13 17:44:08.209236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.802 [2024-10-13 17:44:08.209243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.209252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.209266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.219201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.802 [2024-10-13 17:44:08.219246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.802 [2024-10-13 17:44:08.219259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.802 [2024-10-13 17:44:08.219266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.219272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.219286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.229301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.802 [2024-10-13 17:44:08.229352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.802 [2024-10-13 17:44:08.229367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.802 [2024-10-13 17:44:08.229373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.229379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.229393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.239298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.802 [2024-10-13 17:44:08.239347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.802 [2024-10-13 17:44:08.239360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.802 [2024-10-13 17:44:08.239366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.239373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.239386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.249282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.802 [2024-10-13 17:44:08.249325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.802 [2024-10-13 17:44:08.249339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.802 [2024-10-13 17:44:08.249345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.249351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.249364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.259369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.802 [2024-10-13 17:44:08.259422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.802 [2024-10-13 17:44:08.259435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.802 [2024-10-13 17:44:08.259441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.259447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.259460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.269461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.802 [2024-10-13 17:44:08.269511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.802 [2024-10-13 17:44:08.269524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.802 [2024-10-13 17:44:08.269530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.269536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.269550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.279400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.802 [2024-10-13 17:44:08.279452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.802 [2024-10-13 17:44:08.279465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.802 [2024-10-13 17:44:08.279472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.279478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.279492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.289387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.802 [2024-10-13 17:44:08.289436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.802 [2024-10-13 17:44:08.289450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.802 [2024-10-13 17:44:08.289457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.289464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.289477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.299420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.802 [2024-10-13 17:44:08.299465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.802 [2024-10-13 17:44:08.299478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.802 [2024-10-13 17:44:08.299484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.802 [2024-10-13 17:44:08.299494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.802 [2024-10-13 17:44:08.299507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.802 qpair failed and we were unable to recover it. 
00:33:59.802 [2024-10-13 17:44:08.309514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.803 [2024-10-13 17:44:08.309566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.803 [2024-10-13 17:44:08.309579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.803 [2024-10-13 17:44:08.309586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.803 [2024-10-13 17:44:08.309592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.803 [2024-10-13 17:44:08.309605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.803 qpair failed and we were unable to recover it. 
00:33:59.803 [2024-10-13 17:44:08.319513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.803 [2024-10-13 17:44:08.319567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.803 [2024-10-13 17:44:08.319579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.803 [2024-10-13 17:44:08.319586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.803 [2024-10-13 17:44:08.319592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:33:59.803 [2024-10-13 17:44:08.319605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.803 qpair failed and we were unable to recover it. 
00:34:00.066 [2024-10-13 17:44:08.329559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.066 [2024-10-13 17:44:08.329606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.066 [2024-10-13 17:44:08.329621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.066 [2024-10-13 17:44:08.329627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.066 [2024-10-13 17:44:08.329633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.066 [2024-10-13 17:44:08.329647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.066 qpair failed and we were unable to recover it. 
00:34:00.066 [2024-10-13 17:44:08.339563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.066 [2024-10-13 17:44:08.339607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.066 [2024-10-13 17:44:08.339622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.066 [2024-10-13 17:44:08.339629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.066 [2024-10-13 17:44:08.339635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.066 [2024-10-13 17:44:08.339649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.066 qpair failed and we were unable to recover it. 
00:34:00.066 [2024-10-13 17:44:08.349599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.066 [2024-10-13 17:44:08.349653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.066 [2024-10-13 17:44:08.349667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.066 [2024-10-13 17:44:08.349674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.066 [2024-10-13 17:44:08.349680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.066 [2024-10-13 17:44:08.349693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.066 qpair failed and we were unable to recover it. 
00:34:00.066 [2024-10-13 17:44:08.359603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.066 [2024-10-13 17:44:08.359671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.066 [2024-10-13 17:44:08.359684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.066 [2024-10-13 17:44:08.359690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.066 [2024-10-13 17:44:08.359697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.066 [2024-10-13 17:44:08.359710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.066 qpair failed and we were unable to recover it. 
00:34:00.066 [2024-10-13 17:44:08.369615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.066 [2024-10-13 17:44:08.369672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.066 [2024-10-13 17:44:08.369685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.066 [2024-10-13 17:44:08.369692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.066 [2024-10-13 17:44:08.369698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.066 [2024-10-13 17:44:08.369711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.066 qpair failed and we were unable to recover it.
00:34:00.066 [2024-10-13 17:44:08.379532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.066 [2024-10-13 17:44:08.379572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.066 [2024-10-13 17:44:08.379588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.066 [2024-10-13 17:44:08.379594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.066 [2024-10-13 17:44:08.379601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.066 [2024-10-13 17:44:08.379615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.066 qpair failed and we were unable to recover it.
00:34:00.066 [2024-10-13 17:44:08.389726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.066 [2024-10-13 17:44:08.389778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.066 [2024-10-13 17:44:08.389792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.066 [2024-10-13 17:44:08.389799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.066 [2024-10-13 17:44:08.389809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.066 [2024-10-13 17:44:08.389822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.066 qpair failed and we were unable to recover it.
00:34:00.066 [2024-10-13 17:44:08.399631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.066 [2024-10-13 17:44:08.399683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.066 [2024-10-13 17:44:08.399696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.066 [2024-10-13 17:44:08.399703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.066 [2024-10-13 17:44:08.399709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.066 [2024-10-13 17:44:08.399722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.066 qpair failed and we were unable to recover it.
00:34:00.066 [2024-10-13 17:44:08.409766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.066 [2024-10-13 17:44:08.409812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.066 [2024-10-13 17:44:08.409825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.066 [2024-10-13 17:44:08.409831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.066 [2024-10-13 17:44:08.409838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.066 [2024-10-13 17:44:08.409850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.066 qpair failed and we were unable to recover it.
00:34:00.066 [2024-10-13 17:44:08.419767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.066 [2024-10-13 17:44:08.419821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.066 [2024-10-13 17:44:08.419845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.066 [2024-10-13 17:44:08.419853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.066 [2024-10-13 17:44:08.419860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.066 [2024-10-13 17:44:08.419879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.066 qpair failed and we were unable to recover it.
00:34:00.066 [2024-10-13 17:44:08.429838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.066 [2024-10-13 17:44:08.429895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.066 [2024-10-13 17:44:08.429920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.066 [2024-10-13 17:44:08.429928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.066 [2024-10-13 17:44:08.429935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.066 [2024-10-13 17:44:08.429954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.066 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.439824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.439874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.439889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.439896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.439903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.439917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.449906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.449972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.449986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.449994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.450000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.450014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.459868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.459923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.459936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.459944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.459950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.459964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.469940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.469994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.470007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.470014] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.470021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.470034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.479943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.479997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.480011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.480022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.480028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.480042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.489937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.490011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.490024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.490031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.490037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.490050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.499989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.500048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.500065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.500073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.500079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.500093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.510057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.510117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.510130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.510137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.510143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.510156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.520045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.520100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.520113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.520120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.520126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.520139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.530076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.530126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.530140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.530147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.530153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.530167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.539961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.540007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.540020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.540027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.540034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.540048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.550165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.550215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.550228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.550235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.550241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.067 [2024-10-13 17:44:08.550255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.067 qpair failed and we were unable to recover it.
00:34:00.067 [2024-10-13 17:44:08.560150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.067 [2024-10-13 17:44:08.560197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.067 [2024-10-13 17:44:08.560210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.067 [2024-10-13 17:44:08.560216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.067 [2024-10-13 17:44:08.560223] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.068 [2024-10-13 17:44:08.560236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.068 qpair failed and we were unable to recover it.
00:34:00.068 [2024-10-13 17:44:08.570165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.068 [2024-10-13 17:44:08.570212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.068 [2024-10-13 17:44:08.570225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.068 [2024-10-13 17:44:08.570239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.068 [2024-10-13 17:44:08.570245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.068 [2024-10-13 17:44:08.570259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.068 qpair failed and we were unable to recover it.
00:34:00.068 [2024-10-13 17:44:08.580221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.068 [2024-10-13 17:44:08.580264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.068 [2024-10-13 17:44:08.580278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.068 [2024-10-13 17:44:08.580284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.068 [2024-10-13 17:44:08.580291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.068 [2024-10-13 17:44:08.580304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.068 qpair failed and we were unable to recover it.
00:34:00.330 [2024-10-13 17:44:08.590152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.330 [2024-10-13 17:44:08.590251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.330 [2024-10-13 17:44:08.590265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.330 [2024-10-13 17:44:08.590272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.330 [2024-10-13 17:44:08.590278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.330 [2024-10-13 17:44:08.590292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.330 qpair failed and we were unable to recover it.
00:34:00.330 [2024-10-13 17:44:08.600258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.330 [2024-10-13 17:44:08.600340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.330 [2024-10-13 17:44:08.600354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.330 [2024-10-13 17:44:08.600361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.330 [2024-10-13 17:44:08.600367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.330 [2024-10-13 17:44:08.600381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.330 qpair failed and we were unable to recover it.
00:34:00.330 [2024-10-13 17:44:08.610206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.330 [2024-10-13 17:44:08.610255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.610268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.610275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.610281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.610294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.331 [2024-10-13 17:44:08.620298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.331 [2024-10-13 17:44:08.620349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.620362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.620369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.620375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.620388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.331 [2024-10-13 17:44:08.630391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.331 [2024-10-13 17:44:08.630443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.630457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.630464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.630470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.630483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.331 [2024-10-13 17:44:08.640230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.331 [2024-10-13 17:44:08.640282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.640295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.640301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.640308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.640321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.331 [2024-10-13 17:44:08.650401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.331 [2024-10-13 17:44:08.650447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.650460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.650467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.650473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.650486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.331 [2024-10-13 17:44:08.660423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.331 [2024-10-13 17:44:08.660470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.660484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.660494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.660500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.660513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.331 [2024-10-13 17:44:08.670494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.331 [2024-10-13 17:44:08.670574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.670587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.670594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.670600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.670613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.331 [2024-10-13 17:44:08.680478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.331 [2024-10-13 17:44:08.680530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.680543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.680550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.680556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.680569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.331 [2024-10-13 17:44:08.690503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.331 [2024-10-13 17:44:08.690552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.690565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.690571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.690577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.690590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.331 [2024-10-13 17:44:08.700522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.331 [2024-10-13 17:44:08.700577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.700589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.700596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.700602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.700615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.331 [2024-10-13 17:44:08.710593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.331 [2024-10-13 17:44:08.710647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.331 [2024-10-13 17:44:08.710660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.331 [2024-10-13 17:44:08.710667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.331 [2024-10-13 17:44:08.710673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.331 [2024-10-13 17:44:08.710685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.331 qpair failed and we were unable to recover it.
00:34:00.332 [2024-10-13 17:44:08.720554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.332 [2024-10-13 17:44:08.720605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.332 [2024-10-13 17:44:08.720617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.332 [2024-10-13 17:44:08.720624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.332 [2024-10-13 17:44:08.720630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.332 [2024-10-13 17:44:08.720643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.332 qpair failed and we were unable to recover it.
00:34:00.332 [2024-10-13 17:44:08.730602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.332 [2024-10-13 17:44:08.730652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.332 [2024-10-13 17:44:08.730665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.332 [2024-10-13 17:44:08.730672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.332 [2024-10-13 17:44:08.730678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.332 [2024-10-13 17:44:08.730691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.332 qpair failed and we were unable to recover it. 
00:34:00.332 [2024-10-13 17:44:08.740637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.332 [2024-10-13 17:44:08.740722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.332 [2024-10-13 17:44:08.740735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.332 [2024-10-13 17:44:08.740742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.332 [2024-10-13 17:44:08.740748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.332 [2024-10-13 17:44:08.740761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.332 qpair failed and we were unable to recover it. 
00:34:00.332 [2024-10-13 17:44:08.750726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.332 [2024-10-13 17:44:08.750779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.332 [2024-10-13 17:44:08.750793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.332 [2024-10-13 17:44:08.750803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.332 [2024-10-13 17:44:08.750809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.332 [2024-10-13 17:44:08.750823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.332 qpair failed and we were unable to recover it. 
00:34:00.332 [2024-10-13 17:44:08.760698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.332 [2024-10-13 17:44:08.760755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.332 [2024-10-13 17:44:08.760780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.332 [2024-10-13 17:44:08.760788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.332 [2024-10-13 17:44:08.760795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.332 [2024-10-13 17:44:08.760814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.332 qpair failed and we were unable to recover it. 
00:34:00.332 [2024-10-13 17:44:08.770677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.332 [2024-10-13 17:44:08.770750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.332 [2024-10-13 17:44:08.770765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.332 [2024-10-13 17:44:08.770772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.332 [2024-10-13 17:44:08.770778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.332 [2024-10-13 17:44:08.770793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.332 qpair failed and we were unable to recover it. 
00:34:00.332 [2024-10-13 17:44:08.780722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.332 [2024-10-13 17:44:08.780767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.332 [2024-10-13 17:44:08.780781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.332 [2024-10-13 17:44:08.780788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.332 [2024-10-13 17:44:08.780795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.332 [2024-10-13 17:44:08.780809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.332 qpair failed and we were unable to recover it. 
00:34:00.332 [2024-10-13 17:44:08.790813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.332 [2024-10-13 17:44:08.790867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.332 [2024-10-13 17:44:08.790880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.332 [2024-10-13 17:44:08.790888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.332 [2024-10-13 17:44:08.790896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.332 [2024-10-13 17:44:08.790909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.332 qpair failed and we were unable to recover it. 
00:34:00.332 [2024-10-13 17:44:08.800787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.332 [2024-10-13 17:44:08.800850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.332 [2024-10-13 17:44:08.800863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.332 [2024-10-13 17:44:08.800870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.332 [2024-10-13 17:44:08.800876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.332 [2024-10-13 17:44:08.800889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.332 qpair failed and we were unable to recover it. 
00:34:00.332 [2024-10-13 17:44:08.810831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.332 [2024-10-13 17:44:08.810885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.332 [2024-10-13 17:44:08.810898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.332 [2024-10-13 17:44:08.810905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.332 [2024-10-13 17:44:08.810911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.332 [2024-10-13 17:44:08.810924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.332 qpair failed and we were unable to recover it. 
00:34:00.332 [2024-10-13 17:44:08.820865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.332 [2024-10-13 17:44:08.820914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.332 [2024-10-13 17:44:08.820927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.332 [2024-10-13 17:44:08.820934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.332 [2024-10-13 17:44:08.820940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.332 [2024-10-13 17:44:08.820953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.332 qpair failed and we were unable to recover it. 
00:34:00.332 [2024-10-13 17:44:08.830926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.333 [2024-10-13 17:44:08.830995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.333 [2024-10-13 17:44:08.831009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.333 [2024-10-13 17:44:08.831016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.333 [2024-10-13 17:44:08.831022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.333 [2024-10-13 17:44:08.831035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.333 qpair failed and we were unable to recover it. 
00:34:00.333 [2024-10-13 17:44:08.840946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.333 [2024-10-13 17:44:08.840997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.333 [2024-10-13 17:44:08.841013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.333 [2024-10-13 17:44:08.841020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.333 [2024-10-13 17:44:08.841026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.333 [2024-10-13 17:44:08.841039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.333 qpair failed and we were unable to recover it. 
00:34:00.333 [2024-10-13 17:44:08.850907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.333 [2024-10-13 17:44:08.850955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.333 [2024-10-13 17:44:08.850969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.333 [2024-10-13 17:44:08.850975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.333 [2024-10-13 17:44:08.850982] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.333 [2024-10-13 17:44:08.850995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.333 qpair failed and we were unable to recover it. 
00:34:00.596 [2024-10-13 17:44:08.860974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.596 [2024-10-13 17:44:08.861023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.596 [2024-10-13 17:44:08.861037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.596 [2024-10-13 17:44:08.861043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.596 [2024-10-13 17:44:08.861050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.596 [2024-10-13 17:44:08.861068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.596 qpair failed and we were unable to recover it. 
00:34:00.596 [2024-10-13 17:44:08.871034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.596 [2024-10-13 17:44:08.871084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.596 [2024-10-13 17:44:08.871098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.596 [2024-10-13 17:44:08.871105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.596 [2024-10-13 17:44:08.871111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.596 [2024-10-13 17:44:08.871124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.596 qpair failed and we were unable to recover it. 
00:34:00.596 [2024-10-13 17:44:08.881003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.596 [2024-10-13 17:44:08.881071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.596 [2024-10-13 17:44:08.881085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.596 [2024-10-13 17:44:08.881092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.596 [2024-10-13 17:44:08.881098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.596 [2024-10-13 17:44:08.881112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.596 qpair failed and we were unable to recover it. 
00:34:00.596 [2024-10-13 17:44:08.891058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.596 [2024-10-13 17:44:08.891109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.596 [2024-10-13 17:44:08.891123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.596 [2024-10-13 17:44:08.891129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.596 [2024-10-13 17:44:08.891136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.596 [2024-10-13 17:44:08.891149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.596 qpair failed and we were unable to recover it. 
00:34:00.596 [2024-10-13 17:44:08.901081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.596 [2024-10-13 17:44:08.901123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.596 [2024-10-13 17:44:08.901137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.596 [2024-10-13 17:44:08.901143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.596 [2024-10-13 17:44:08.901150] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.596 [2024-10-13 17:44:08.901163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.596 qpair failed and we were unable to recover it. 
00:34:00.596 [2024-10-13 17:44:08.911149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.596 [2024-10-13 17:44:08.911199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.596 [2024-10-13 17:44:08.911211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.596 [2024-10-13 17:44:08.911218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.596 [2024-10-13 17:44:08.911225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.596 [2024-10-13 17:44:08.911238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.596 qpair failed and we were unable to recover it. 
00:34:00.596 [2024-10-13 17:44:08.921135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.596 [2024-10-13 17:44:08.921218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.596 [2024-10-13 17:44:08.921231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.596 [2024-10-13 17:44:08.921238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.596 [2024-10-13 17:44:08.921244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.596 [2024-10-13 17:44:08.921257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.596 qpair failed and we were unable to recover it. 
00:34:00.596 [2024-10-13 17:44:08.931123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.596 [2024-10-13 17:44:08.931167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.596 [2024-10-13 17:44:08.931185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.596 [2024-10-13 17:44:08.931191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.596 [2024-10-13 17:44:08.931198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.596 [2024-10-13 17:44:08.931211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.596 qpair failed and we were unable to recover it. 
00:34:00.596 [2024-10-13 17:44:08.941109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.596 [2024-10-13 17:44:08.941163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.596 [2024-10-13 17:44:08.941175] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.596 [2024-10-13 17:44:08.941182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.597 [2024-10-13 17:44:08.941188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.597 [2024-10-13 17:44:08.941201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.597 qpair failed and we were unable to recover it. 
00:34:00.597 [2024-10-13 17:44:08.951249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.597 [2024-10-13 17:44:08.951298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.597 [2024-10-13 17:44:08.951311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.597 [2024-10-13 17:44:08.951317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.597 [2024-10-13 17:44:08.951323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.597 [2024-10-13 17:44:08.951337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.597 qpair failed and we were unable to recover it. 
00:34:00.597 [2024-10-13 17:44:08.961252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.597 [2024-10-13 17:44:08.961304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.597 [2024-10-13 17:44:08.961317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.597 [2024-10-13 17:44:08.961324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.597 [2024-10-13 17:44:08.961330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.597 [2024-10-13 17:44:08.961343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.597 qpair failed and we were unable to recover it. 
00:34:00.597 [2024-10-13 17:44:08.971144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.597 [2024-10-13 17:44:08.971194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.597 [2024-10-13 17:44:08.971207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.597 [2024-10-13 17:44:08.971214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.597 [2024-10-13 17:44:08.971220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.597 [2024-10-13 17:44:08.971236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.597 qpair failed and we were unable to recover it. 
00:34:00.597 [2024-10-13 17:44:08.981302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.597 [2024-10-13 17:44:08.981385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.597 [2024-10-13 17:44:08.981399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.597 [2024-10-13 17:44:08.981405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.597 [2024-10-13 17:44:08.981411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.597 [2024-10-13 17:44:08.981425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.597 qpair failed and we were unable to recover it. 
00:34:00.597 [2024-10-13 17:44:08.991366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.597 [2024-10-13 17:44:08.991418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.597 [2024-10-13 17:44:08.991432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.597 [2024-10-13 17:44:08.991439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.597 [2024-10-13 17:44:08.991445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.597 [2024-10-13 17:44:08.991461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.597 qpair failed and we were unable to recover it. 
00:34:00.597 [2024-10-13 17:44:09.001349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.597 [2024-10-13 17:44:09.001400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.597 [2024-10-13 17:44:09.001414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.597 [2024-10-13 17:44:09.001421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.597 [2024-10-13 17:44:09.001427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.597 [2024-10-13 17:44:09.001440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.597 qpair failed and we were unable to recover it. 
00:34:00.597 [2024-10-13 17:44:09.011379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.597 [2024-10-13 17:44:09.011424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.597 [2024-10-13 17:44:09.011438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.597 [2024-10-13 17:44:09.011444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.597 [2024-10-13 17:44:09.011451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.597 [2024-10-13 17:44:09.011464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.597 qpair failed and we were unable to recover it.
00:34:00.597 [2024-10-13 17:44:09.021369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.597 [2024-10-13 17:44:09.021420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.597 [2024-10-13 17:44:09.021436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.597 [2024-10-13 17:44:09.021443] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.597 [2024-10-13 17:44:09.021449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.597 [2024-10-13 17:44:09.021462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.597 qpair failed and we were unable to recover it.
00:34:00.597 [2024-10-13 17:44:09.031499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.597 [2024-10-13 17:44:09.031553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.597 [2024-10-13 17:44:09.031567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.597 [2024-10-13 17:44:09.031573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.597 [2024-10-13 17:44:09.031580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.597 [2024-10-13 17:44:09.031593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.597 qpair failed and we were unable to recover it.
00:34:00.597 [2024-10-13 17:44:09.041469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.597 [2024-10-13 17:44:09.041519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.597 [2024-10-13 17:44:09.041532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.597 [2024-10-13 17:44:09.041540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.597 [2024-10-13 17:44:09.041547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.597 [2024-10-13 17:44:09.041561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.597 qpair failed and we were unable to recover it.
00:34:00.597 [2024-10-13 17:44:09.051479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.597 [2024-10-13 17:44:09.051543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.597 [2024-10-13 17:44:09.051556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.597 [2024-10-13 17:44:09.051562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.597 [2024-10-13 17:44:09.051569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.597 [2024-10-13 17:44:09.051582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.597 qpair failed and we were unable to recover it.
00:34:00.597 [2024-10-13 17:44:09.061530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.597 [2024-10-13 17:44:09.061579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.597 [2024-10-13 17:44:09.061592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.597 [2024-10-13 17:44:09.061600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.597 [2024-10-13 17:44:09.061607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.597 [2024-10-13 17:44:09.061627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.597 qpair failed and we were unable to recover it.
00:34:00.597 [2024-10-13 17:44:09.071616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.597 [2024-10-13 17:44:09.071677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.597 [2024-10-13 17:44:09.071689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.598 [2024-10-13 17:44:09.071696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.598 [2024-10-13 17:44:09.071702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.598 [2024-10-13 17:44:09.071715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.598 qpair failed and we were unable to recover it.
00:34:00.598 [2024-10-13 17:44:09.081591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.598 [2024-10-13 17:44:09.081645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.598 [2024-10-13 17:44:09.081659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.598 [2024-10-13 17:44:09.081666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.598 [2024-10-13 17:44:09.081672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.598 [2024-10-13 17:44:09.081685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.598 qpair failed and we were unable to recover it.
00:34:00.598 [2024-10-13 17:44:09.091483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.598 [2024-10-13 17:44:09.091527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.598 [2024-10-13 17:44:09.091541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.598 [2024-10-13 17:44:09.091548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.598 [2024-10-13 17:44:09.091555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.598 [2024-10-13 17:44:09.091568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.598 qpair failed and we were unable to recover it.
00:34:00.598 [2024-10-13 17:44:09.101633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.598 [2024-10-13 17:44:09.101683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.598 [2024-10-13 17:44:09.101697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.598 [2024-10-13 17:44:09.101704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.598 [2024-10-13 17:44:09.101710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.598 [2024-10-13 17:44:09.101724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.598 qpair failed and we were unable to recover it.
00:34:00.598 [2024-10-13 17:44:09.111695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.598 [2024-10-13 17:44:09.111749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.598 [2024-10-13 17:44:09.111766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.598 [2024-10-13 17:44:09.111773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.598 [2024-10-13 17:44:09.111779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.598 [2024-10-13 17:44:09.111792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.598 qpair failed and we were unable to recover it.
00:34:00.861 [2024-10-13 17:44:09.121693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.861 [2024-10-13 17:44:09.121739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.861 [2024-10-13 17:44:09.121753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.861 [2024-10-13 17:44:09.121760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.861 [2024-10-13 17:44:09.121766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.861 [2024-10-13 17:44:09.121779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.861 qpair failed and we were unable to recover it.
00:34:00.861 [2024-10-13 17:44:09.131723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.861 [2024-10-13 17:44:09.131771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.861 [2024-10-13 17:44:09.131785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.861 [2024-10-13 17:44:09.131792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.861 [2024-10-13 17:44:09.131799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.861 [2024-10-13 17:44:09.131812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.861 qpair failed and we were unable to recover it.
00:34:00.861 [2024-10-13 17:44:09.141753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.861 [2024-10-13 17:44:09.141802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.861 [2024-10-13 17:44:09.141815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.861 [2024-10-13 17:44:09.141821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.861 [2024-10-13 17:44:09.141828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.861 [2024-10-13 17:44:09.141841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.861 qpair failed and we were unable to recover it.
00:34:00.861 [2024-10-13 17:44:09.151818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.861 [2024-10-13 17:44:09.151895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.861 [2024-10-13 17:44:09.151909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.861 [2024-10-13 17:44:09.151916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.861 [2024-10-13 17:44:09.151922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.861 [2024-10-13 17:44:09.151941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.861 qpair failed and we were unable to recover it.
00:34:00.861 [2024-10-13 17:44:09.161805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.861 [2024-10-13 17:44:09.161880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.861 [2024-10-13 17:44:09.161893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.861 [2024-10-13 17:44:09.161900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.861 [2024-10-13 17:44:09.161906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.861 [2024-10-13 17:44:09.161919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.861 qpair failed and we were unable to recover it.
00:34:00.861 [2024-10-13 17:44:09.171828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.171876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.171889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.171896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.171902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.171915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.181873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.181960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.181973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.181980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.181986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.182000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.191933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.191984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.191997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.192004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.192010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.192023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.201923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.201986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.202002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.202009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.202015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.202029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.211967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.212044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.212057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.212069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.212076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.212089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.221991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.222058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.222077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.222084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.222090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.222103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.232053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.232107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.232121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.232127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.232134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.232147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.241897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.241951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.241964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.241970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.241977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.241998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.252055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.252102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.252115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.252122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.252128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.252142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.262080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.262137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.262150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.262157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.262163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.262177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.272122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.272175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.272187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.272194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.272200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.862 [2024-10-13 17:44:09.272213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.862 qpair failed and we were unable to recover it.
00:34:00.862 [2024-10-13 17:44:09.282190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.862 [2024-10-13 17:44:09.282260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.862 [2024-10-13 17:44:09.282273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.862 [2024-10-13 17:44:09.282280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.862 [2024-10-13 17:44:09.282287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.863 [2024-10-13 17:44:09.282300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.863 qpair failed and we were unable to recover it.
00:34:00.863 [2024-10-13 17:44:09.292208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.863 [2024-10-13 17:44:09.292281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.863 [2024-10-13 17:44:09.292298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.863 [2024-10-13 17:44:09.292305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.863 [2024-10-13 17:44:09.292311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.863 [2024-10-13 17:44:09.292326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.863 qpair failed and we were unable to recover it.
00:34:00.863 [2024-10-13 17:44:09.302197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.863 [2024-10-13 17:44:09.302242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.863 [2024-10-13 17:44:09.302256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.863 [2024-10-13 17:44:09.302262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.863 [2024-10-13 17:44:09.302268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.863 [2024-10-13 17:44:09.302282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.863 qpair failed and we were unable to recover it.
00:34:00.863 [2024-10-13 17:44:09.312283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.863 [2024-10-13 17:44:09.312335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.863 [2024-10-13 17:44:09.312348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.863 [2024-10-13 17:44:09.312355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.863 [2024-10-13 17:44:09.312361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.863 [2024-10-13 17:44:09.312374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.863 qpair failed and we were unable to recover it.
00:34:00.863 [2024-10-13 17:44:09.322279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.863 [2024-10-13 17:44:09.322332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.863 [2024-10-13 17:44:09.322345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.863 [2024-10-13 17:44:09.322351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.863 [2024-10-13 17:44:09.322358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.863 [2024-10-13 17:44:09.322371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.863 qpair failed and we were unable to recover it.
00:34:00.863 [2024-10-13 17:44:09.332310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.863 [2024-10-13 17:44:09.332407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.863 [2024-10-13 17:44:09.332421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.863 [2024-10-13 17:44:09.332427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.863 [2024-10-13 17:44:09.332437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.863 [2024-10-13 17:44:09.332451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.863 qpair failed and we were unable to recover it.
00:34:00.863 [2024-10-13 17:44:09.342321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.863 [2024-10-13 17:44:09.342373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.863 [2024-10-13 17:44:09.342389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.863 [2024-10-13 17:44:09.342396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.863 [2024-10-13 17:44:09.342403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.863 [2024-10-13 17:44:09.342418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.863 qpair failed and we were unable to recover it.
00:34:00.863 [2024-10-13 17:44:09.352347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.863 [2024-10-13 17:44:09.352398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.863 [2024-10-13 17:44:09.352412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.863 [2024-10-13 17:44:09.352419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.863 [2024-10-13 17:44:09.352425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.863 [2024-10-13 17:44:09.352438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.863 qpair failed and we were unable to recover it.
00:34:00.863 [2024-10-13 17:44:09.362342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.863 [2024-10-13 17:44:09.362396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.863 [2024-10-13 17:44:09.362409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.863 [2024-10-13 17:44:09.362416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.863 [2024-10-13 17:44:09.362422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960
00:34:00.863 [2024-10-13 17:44:09.362435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:00.863 qpair failed and we were unable to recover it.
00:34:00.863 [2024-10-13 17:44:09.372357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.863 [2024-10-13 17:44:09.372402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.863 [2024-10-13 17:44:09.372415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.863 [2024-10-13 17:44:09.372421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.863 [2024-10-13 17:44:09.372428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.863 [2024-10-13 17:44:09.372440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.863 qpair failed and we were unable to recover it. 
00:34:00.863 [2024-10-13 17:44:09.382339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.863 [2024-10-13 17:44:09.382396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.863 [2024-10-13 17:44:09.382410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.863 [2024-10-13 17:44:09.382417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.863 [2024-10-13 17:44:09.382423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:00.863 [2024-10-13 17:44:09.382436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.863 qpair failed and we were unable to recover it. 
00:34:01.126 [2024-10-13 17:44:09.392502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.126 [2024-10-13 17:44:09.392552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.126 [2024-10-13 17:44:09.392565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.126 [2024-10-13 17:44:09.392572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.126 [2024-10-13 17:44:09.392578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.126 [2024-10-13 17:44:09.392591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.126 qpair failed and we were unable to recover it. 
00:34:01.126 [2024-10-13 17:44:09.402345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.126 [2024-10-13 17:44:09.402393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.126 [2024-10-13 17:44:09.402408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.126 [2024-10-13 17:44:09.402414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.126 [2024-10-13 17:44:09.402421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.126 [2024-10-13 17:44:09.402436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.126 qpair failed and we were unable to recover it. 
00:34:01.126 [2024-10-13 17:44:09.412515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.126 [2024-10-13 17:44:09.412562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.126 [2024-10-13 17:44:09.412576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.126 [2024-10-13 17:44:09.412583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.126 [2024-10-13 17:44:09.412589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.126 [2024-10-13 17:44:09.412602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.126 qpair failed and we were unable to recover it. 
00:34:01.126 [2024-10-13 17:44:09.422529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.126 [2024-10-13 17:44:09.422579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.126 [2024-10-13 17:44:09.422592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.126 [2024-10-13 17:44:09.422599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.126 [2024-10-13 17:44:09.422609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.126 [2024-10-13 17:44:09.422622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.126 qpair failed and we were unable to recover it. 
00:34:01.126 [2024-10-13 17:44:09.432625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.126 [2024-10-13 17:44:09.432674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.432688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.432694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.432701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.127 [2024-10-13 17:44:09.432714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 [2024-10-13 17:44:09.442587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.127 [2024-10-13 17:44:09.442638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.442651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.442658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.442664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.127 [2024-10-13 17:44:09.442677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 [2024-10-13 17:44:09.452582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.127 [2024-10-13 17:44:09.452631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.452645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.452652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.452659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.127 [2024-10-13 17:44:09.452673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 [2024-10-13 17:44:09.462630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.127 [2024-10-13 17:44:09.462674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.462687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.462694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.462700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.127 [2024-10-13 17:44:09.462713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 [2024-10-13 17:44:09.472717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.127 [2024-10-13 17:44:09.472777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.472790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.472796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.472803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.127 [2024-10-13 17:44:09.472815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 [2024-10-13 17:44:09.482699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.127 [2024-10-13 17:44:09.482758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.482782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.482790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.482797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.127 [2024-10-13 17:44:09.482816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 [2024-10-13 17:44:09.492731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.127 [2024-10-13 17:44:09.492782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.492807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.492816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.492823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.127 [2024-10-13 17:44:09.492841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 [2024-10-13 17:44:09.502647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.127 [2024-10-13 17:44:09.502706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.502720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.502728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.502734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.127 [2024-10-13 17:44:09.502748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 [2024-10-13 17:44:09.512814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.127 [2024-10-13 17:44:09.512867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.512880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.512887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.512897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7e960 00:34:01.127 [2024-10-13 17:44:09.512911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 [2024-10-13 17:44:09.513331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8c3c0 is same with the state(5) to be set 00:34:01.127 [2024-10-13 17:44:09.522806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.127 [2024-10-13 17:44:09.522942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.523009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.523034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.523055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7158000b90 00:34:01.127 [2024-10-13 17:44:09.523124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 [2024-10-13 17:44:09.532843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.127 [2024-10-13 17:44:09.532920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.127 [2024-10-13 17:44:09.532949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.127 [2024-10-13 17:44:09.532965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.127 [2024-10-13 17:44:09.532979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7158000b90 00:34:01.127 [2024-10-13 17:44:09.533011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:01.127 qpair failed and we were unable to recover it. 
00:34:01.127 Read completed with error (sct=0, sc=8) 00:34:01.127 starting I/O failed 00:34:01.127 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 
Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 [2024-10-13 17:44:09.533313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:01.128 [2024-10-13 17:44:09.542833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.128 [2024-10-13 17:44:09.542878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.128 [2024-10-13 17:44:09.542893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.128 [2024-10-13 17:44:09.542900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.128 [2024-10-13 17:44:09.542906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f715c000b90 00:34:01.128 [2024-10-13 17:44:09.542919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:01.128 qpair failed and we were unable to recover it. 
00:34:01.128 [2024-10-13 17:44:09.552985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.128 [2024-10-13 17:44:09.553044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.128 [2024-10-13 17:44:09.553067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.128 [2024-10-13 17:44:09.553074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.128 [2024-10-13 17:44:09.553079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f715c000b90 00:34:01.128 [2024-10-13 17:44:09.553093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:01.128 qpair failed and we were unable to recover it. 
00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 
Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Write completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 Read completed with error (sct=0, sc=8) 00:34:01.128 starting I/O failed 00:34:01.128 [2024-10-13 17:44:09.553855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.128 [2024-10-13 17:44:09.562919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.128 [2024-10-13 17:44:09.563026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.128 [2024-10-13 17:44:09.563101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.128 [2024-10-13 17:44:09.563126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.128 [2024-10-13 17:44:09.563147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7164000b90 00:34:01.128 [2024-10-13 17:44:09.563201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.128 qpair failed and we were unable to recover it. 
00:34:01.128 [2024-10-13 17:44:09.572950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:01.128 [2024-10-13 17:44:09.573022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:01.128 [2024-10-13 17:44:09.573052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:01.128 [2024-10-13 17:44:09.573075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:01.129 [2024-10-13 17:44:09.573090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7164000b90
00:34:01.129 [2024-10-13 17:44:09.573120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:01.129 qpair failed and we were unable to recover it.
00:34:01.129 [2024-10-13 17:44:09.573611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8c3c0 (9): Bad file descriptor
00:34:01.129 Initializing NVMe Controllers
00:34:01.129 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:01.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:01.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:34:01.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:34:01.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:34:01.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:34:01.129 Initialization complete. Launching workers.
00:34:01.129 Starting thread on core 1 00:34:01.129 Starting thread on core 2 00:34:01.129 Starting thread on core 3 00:34:01.129 Starting thread on core 0 00:34:01.129 17:44:09 -- host/target_disconnect.sh@59 -- # sync 00:34:01.129 00:34:01.129 real 0m11.322s 00:34:01.129 user 0m21.720s 00:34:01.129 sys 0m3.601s 00:34:01.129 17:44:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:01.129 17:44:09 -- common/autotest_common.sh@10 -- # set +x 00:34:01.129 ************************************ 00:34:01.129 END TEST nvmf_target_disconnect_tc2 00:34:01.129 ************************************ 00:34:01.129 17:44:09 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:34:01.129 17:44:09 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:01.129 17:44:09 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:34:01.129 17:44:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:01.129 17:44:09 -- nvmf/common.sh@116 -- # sync 00:34:01.129 17:44:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:01.129 17:44:09 -- nvmf/common.sh@119 -- # set +e 00:34:01.129 17:44:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:01.129 17:44:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:01.129 rmmod nvme_tcp 00:34:01.390 rmmod nvme_fabrics 00:34:01.390 rmmod nvme_keyring 00:34:01.390 17:44:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:01.390 17:44:09 -- nvmf/common.sh@123 -- # set -e 00:34:01.390 17:44:09 -- nvmf/common.sh@124 -- # return 0 00:34:01.390 17:44:09 -- nvmf/common.sh@477 -- # '[' -n 3422063 ']' 00:34:01.390 17:44:09 -- nvmf/common.sh@478 -- # killprocess 3422063 00:34:01.390 17:44:09 -- common/autotest_common.sh@926 -- # '[' -z 3422063 ']' 00:34:01.390 17:44:09 -- common/autotest_common.sh@930 -- # kill -0 3422063 00:34:01.390 17:44:09 -- common/autotest_common.sh@931 -- # uname 00:34:01.390 17:44:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:01.390 17:44:09 -- common/autotest_common.sh@932 -- # 
ps --no-headers -o comm= 3422063 00:34:01.390 17:44:09 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:34:01.390 17:44:09 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:34:01.390 17:44:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3422063' 00:34:01.390 killing process with pid 3422063 00:34:01.390 17:44:09 -- common/autotest_common.sh@945 -- # kill 3422063 00:34:01.390 17:44:09 -- common/autotest_common.sh@950 -- # wait 3422063 00:34:01.390 17:44:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:01.390 17:44:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:01.390 17:44:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:01.390 17:44:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:01.390 17:44:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:01.390 17:44:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.390 17:44:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:01.390 17:44:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.004 17:44:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:04.004 00:34:04.004 real 0m21.404s 00:34:04.004 user 0m49.289s 00:34:04.004 sys 0m9.596s 00:34:04.004 17:44:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.004 17:44:11 -- common/autotest_common.sh@10 -- # set +x 00:34:04.004 ************************************ 00:34:04.004 END TEST nvmf_target_disconnect 00:34:04.004 ************************************ 00:34:04.004 17:44:11 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:34:04.004 17:44:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:04.004 17:44:11 -- common/autotest_common.sh@10 -- # set +x 00:34:04.004 17:44:12 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:34:04.004 00:34:04.004 real 26m25.745s 00:34:04.004 user 70m32.370s 00:34:04.004 sys 7m27.819s 00:34:04.004 17:44:12 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.004 17:44:12 -- common/autotest_common.sh@10 -- # set +x 00:34:04.004 ************************************ 00:34:04.004 END TEST nvmf_tcp 00:34:04.004 ************************************ 00:34:04.004 17:44:12 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:34:04.004 17:44:12 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:04.004 17:44:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:04.004 17:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:04.004 17:44:12 -- common/autotest_common.sh@10 -- # set +x 00:34:04.004 ************************************ 00:34:04.004 START TEST spdkcli_nvmf_tcp 00:34:04.004 ************************************ 00:34:04.004 17:44:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:04.004 * Looking for test storage... 
00:34:04.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:04.004 17:44:12 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:04.004 17:44:12 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:04.004 17:44:12 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:04.004 17:44:12 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.004 17:44:12 -- nvmf/common.sh@7 -- # uname -s 00:34:04.004 17:44:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.004 17:44:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.004 17:44:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.004 17:44:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.004 17:44:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.004 17:44:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.004 17:44:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.004 17:44:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.004 17:44:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.004 17:44:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.004 17:44:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:04.004 17:44:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:04.004 17:44:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.004 17:44:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.004 17:44:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.004 17:44:12 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.004 17:44:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.004 17:44:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.004 17:44:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.004 17:44:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.004 17:44:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.004 17:44:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.004 17:44:12 -- paths/export.sh@5 -- # export PATH 00:34:04.005 17:44:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.005 17:44:12 -- nvmf/common.sh@46 -- # : 0 00:34:04.005 17:44:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:04.005 17:44:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:04.005 17:44:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:04.005 17:44:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.005 17:44:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.005 17:44:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:04.005 17:44:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:04.005 17:44:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:04.005 17:44:12 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:04.005 17:44:12 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:04.005 17:44:12 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:04.005 17:44:12 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:04.005 17:44:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:04.005 17:44:12 -- common/autotest_common.sh@10 -- # set +x 00:34:04.005 17:44:12 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:04.005 17:44:12 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3423897 00:34:04.005 17:44:12 -- spdkcli/common.sh@34 -- # waitforlisten 3423897 00:34:04.005 17:44:12 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:04.005 17:44:12 -- common/autotest_common.sh@819 -- # '[' -z 3423897 ']' 00:34:04.005 17:44:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.005 17:44:12 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:34:04.005 17:44:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.005 17:44:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:04.005 17:44:12 -- common/autotest_common.sh@10 -- # set +x 00:34:04.005 [2024-10-13 17:44:12.280986] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:34:04.005 [2024-10-13 17:44:12.281079] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423897 ] 00:34:04.005 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.005 [2024-10-13 17:44:12.348877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:04.005 [2024-10-13 17:44:12.386340] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:04.005 [2024-10-13 17:44:12.386606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.005 [2024-10-13 17:44:12.386608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.576 17:44:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:04.576 17:44:13 -- common/autotest_common.sh@852 -- # return 0 00:34:04.576 17:44:13 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:04.576 17:44:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:04.577 17:44:13 -- common/autotest_common.sh@10 -- # set +x 00:34:04.577 17:44:13 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:04.577 17:44:13 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:04.577 17:44:13 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:04.577 17:44:13 -- 
common/autotest_common.sh@712 -- # xtrace_disable 00:34:04.577 17:44:13 -- common/autotest_common.sh@10 -- # set +x 00:34:04.577 17:44:13 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:04.577 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:04.577 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:04.577 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:04.577 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:04.577 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:04.577 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:04.577 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:04.577 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:04.577 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create 
Malloc1'\'' '\''Malloc1'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:04.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:04.577 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:04.577 ' 00:34:05.148 [2024-10-13 17:44:13.427099] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:07.691 [2024-10-13 17:44:15.733483] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.632 [2024-10-13 17:44:17.057873] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
127.0.0.1 port 4260 *** 00:34:11.179 [2024-10-13 17:44:19.529413] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:13.722 [2024-10-13 17:44:21.672055] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:15.104 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:15.104 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:15.104 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:15.104 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:15.104 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:15.104 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:15.105 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:15.105 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:15.105 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:15.105 Executing 
command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:15.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:15.105 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:15.105 17:44:23 -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:34:15.105 17:44:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:15.105 17:44:23 -- common/autotest_common.sh@10 -- # set +x 00:34:15.105 17:44:23 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:15.105 17:44:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:15.105 17:44:23 -- common/autotest_common.sh@10 -- # set +x 00:34:15.105 17:44:23 -- spdkcli/nvmf.sh@69 -- # check_match 00:34:15.105 17:44:23 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:15.365 17:44:23 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:15.366 17:44:23 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:15.366 17:44:23 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:15.366 17:44:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:15.366 17:44:23 -- common/autotest_common.sh@10 -- # set +x 00:34:15.626 17:44:23 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:15.626 17:44:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:15.626 17:44:23 -- common/autotest_common.sh@10 -- # set +x 00:34:15.626 17:44:23 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:15.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:15.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:15.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 
00:34:15.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:15.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:15.626 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:15.626 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:15.626 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:15.626 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:15.626 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:15.626 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:15.626 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:15.626 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:15.626 ' 00:34:20.925 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:20.925 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:20.925 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:20.925 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:20.925 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:20.925 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:20.926 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:20.926 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:20.926 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:20.926 Executing 
command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:20.926 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:20.926 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:20.926 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:20.926 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:20.926 17:44:29 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:20.926 17:44:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:20.926 17:44:29 -- common/autotest_common.sh@10 -- # set +x 00:34:21.186 17:44:29 -- spdkcli/nvmf.sh@90 -- # killprocess 3423897 00:34:21.186 17:44:29 -- common/autotest_common.sh@926 -- # '[' -z 3423897 ']' 00:34:21.186 17:44:29 -- common/autotest_common.sh@930 -- # kill -0 3423897 00:34:21.186 17:44:29 -- common/autotest_common.sh@931 -- # uname 00:34:21.186 17:44:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:21.186 17:44:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3423897 00:34:21.186 17:44:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:21.186 17:44:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:21.186 17:44:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3423897' 00:34:21.186 killing process with pid 3423897 00:34:21.186 17:44:29 -- common/autotest_common.sh@945 -- # kill 3423897 00:34:21.186 [2024-10-13 17:44:29.550096] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:21.186 17:44:29 -- common/autotest_common.sh@950 -- # wait 3423897 00:34:21.186 17:44:29 -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:21.186 17:44:29 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:21.186 17:44:29 -- spdkcli/common.sh@13 -- # '[' -n 3423897 ']' 00:34:21.186 17:44:29 -- 
spdkcli/common.sh@14 -- # killprocess 3423897 00:34:21.186 17:44:29 -- common/autotest_common.sh@926 -- # '[' -z 3423897 ']' 00:34:21.186 17:44:29 -- common/autotest_common.sh@930 -- # kill -0 3423897 00:34:21.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3423897) - No such process 00:34:21.186 17:44:29 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3423897 is not found' 00:34:21.186 Process with pid 3423897 is not found 00:34:21.186 17:44:29 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:21.186 17:44:29 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:21.186 17:44:29 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:21.186 00:34:21.186 real 0m17.582s 00:34:21.186 user 0m38.928s 00:34:21.186 sys 0m0.789s 00:34:21.186 17:44:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:21.186 17:44:29 -- common/autotest_common.sh@10 -- # set +x 00:34:21.186 ************************************ 00:34:21.186 END TEST spdkcli_nvmf_tcp 00:34:21.186 ************************************ 00:34:21.186 17:44:29 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:21.186 17:44:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:21.186 17:44:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:21.186 17:44:29 -- common/autotest_common.sh@10 -- # set +x 00:34:21.447 ************************************ 00:34:21.447 START TEST nvmf_identify_passthru 00:34:21.447 ************************************ 00:34:21.447 17:44:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:21.447 * Looking 
for test storage... 00:34:21.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:21.447 17:44:29 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.447 17:44:29 -- nvmf/common.sh@7 -- # uname -s 00:34:21.447 17:44:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.447 17:44:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.447 17:44:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.447 17:44:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.447 17:44:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:21.447 17:44:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:21.447 17:44:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.447 17:44:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:21.447 17:44:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.447 17:44:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:21.447 17:44:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:21.447 17:44:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:21.447 17:44:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.447 17:44:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:21.447 17:44:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:21.447 17:44:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.447 17:44:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.447 17:44:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.447 17:44:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.447 17:44:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.447 17:44:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.447 17:44:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.447 17:44:29 -- paths/export.sh@5 -- # export PATH 00:34:21.447 17:44:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.447 17:44:29 -- nvmf/common.sh@46 -- # : 0 00:34:21.447 17:44:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:21.447 17:44:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:21.447 
17:44:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:21.447 17:44:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.447 17:44:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.447 17:44:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:21.447 17:44:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:21.447 17:44:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:21.447 17:44:29 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.447 17:44:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.447 17:44:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.447 17:44:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.447 17:44:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.447 17:44:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.447 17:44:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.447 17:44:29 -- paths/export.sh@5 -- # export PATH 00:34:21.447 17:44:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.447 17:44:29 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:21.447 17:44:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:21.447 17:44:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:21.447 17:44:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:21.447 17:44:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:21.447 17:44:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:21.447 17:44:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.447 17:44:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:21.447 17:44:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.447 17:44:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:21.447 17:44:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:21.447 17:44:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:21.447 17:44:29 -- 
common/autotest_common.sh@10 -- # set +x 00:34:29.591 17:44:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:29.591 17:44:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:29.591 17:44:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:29.591 17:44:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:29.591 17:44:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:29.591 17:44:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:29.591 17:44:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:29.591 17:44:37 -- nvmf/common.sh@294 -- # net_devs=() 00:34:29.591 17:44:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:29.591 17:44:37 -- nvmf/common.sh@295 -- # e810=() 00:34:29.591 17:44:37 -- nvmf/common.sh@295 -- # local -ga e810 00:34:29.591 17:44:37 -- nvmf/common.sh@296 -- # x722=() 00:34:29.591 17:44:37 -- nvmf/common.sh@296 -- # local -ga x722 00:34:29.591 17:44:37 -- nvmf/common.sh@297 -- # mlx=() 00:34:29.591 17:44:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:29.591 17:44:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:29.591 17:44:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:29.591 17:44:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:29.591 17:44:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:29.591 17:44:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:29.591 17:44:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:29.591 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:29.591 17:44:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:29.591 17:44:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:29.591 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:29.591 17:44:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:29.591 17:44:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:29.591 17:44:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:29.591 17:44:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:34:29.591 17:44:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:29.591 17:44:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.591 17:44:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:29.591 Found net devices under 0000:31:00.0: cvl_0_0 00:34:29.591 17:44:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.591 17:44:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:29.592 17:44:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.592 17:44:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:29.592 17:44:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.592 17:44:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:29.592 Found net devices under 0000:31:00.1: cvl_0_1 00:34:29.592 17:44:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.592 17:44:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:29.592 17:44:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:29.592 17:44:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:29.592 17:44:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:29.592 17:44:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:29.592 17:44:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:29.592 17:44:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:29.592 17:44:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:29.592 17:44:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:29.592 17:44:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:29.592 17:44:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:29.592 17:44:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:29.592 17:44:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:29.592 17:44:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:29.592 17:44:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:29.592 17:44:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:29.592 17:44:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:29.592 17:44:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:29.592 17:44:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:29.592 17:44:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:29.592 17:44:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:29.592 17:44:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:29.592 17:44:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:29.592 17:44:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:29.592 17:44:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:29.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:29.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:34:29.592 00:34:29.592 --- 10.0.0.2 ping statistics --- 00:34:29.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.592 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:34:29.592 17:44:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:29.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:29.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:34:29.592 00:34:29.592 --- 10.0.0.1 ping statistics --- 00:34:29.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.592 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:34:29.592 17:44:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:29.592 17:44:37 -- nvmf/common.sh@410 -- # return 0 00:34:29.592 17:44:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:34:29.592 17:44:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:29.592 17:44:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:29.592 17:44:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:29.592 17:44:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:29.592 17:44:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:29.592 17:44:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:29.592 17:44:37 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:29.592 17:44:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:29.592 17:44:37 -- common/autotest_common.sh@10 -- # set +x 00:34:29.592 17:44:37 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:29.592 17:44:37 -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:29.592 17:44:37 -- common/autotest_common.sh@1509 -- # local bdfs 00:34:29.592 17:44:37 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:29.592 17:44:37 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:29.592 17:44:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:29.592 17:44:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:34:29.592 17:44:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:29.592 17:44:37 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:29.592 17:44:37 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:29.592 17:44:37 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:29.592 17:44:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:34:29.592 17:44:37 -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:34:29.592 17:44:37 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:34:29.592 17:44:37 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:34:29.592 17:44:37 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:29.592 17:44:37 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:34:29.592 17:44:37 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:29.592 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.592 17:44:37 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:34:29.592 17:44:37 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:34:29.592 17:44:37 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:29.592 17:44:37 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:29.592 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.162 17:44:38 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:34:30.162 17:44:38 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:30.162 17:44:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:30.162 17:44:38 -- common/autotest_common.sh@10 -- # set +x 00:34:30.162 17:44:38 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:30.162 17:44:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:30.162 17:44:38 -- common/autotest_common.sh@10 -- # set +x 00:34:30.162 17:44:38 -- target/identify_passthru.sh@31 -- # nvmfpid=3431289 
00:34:30.162 17:44:38 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:30.162 17:44:38 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:30.162 17:44:38 -- target/identify_passthru.sh@35 -- # waitforlisten 3431289 00:34:30.162 17:44:38 -- common/autotest_common.sh@819 -- # '[' -z 3431289 ']' 00:34:30.162 17:44:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.162 17:44:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:30.162 17:44:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:30.162 17:44:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:30.162 17:44:38 -- common/autotest_common.sh@10 -- # set +x 00:34:30.162 [2024-10-13 17:44:38.543270] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:34:30.162 [2024-10-13 17:44:38.543340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.162 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.162 [2024-10-13 17:44:38.616732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:30.162 [2024-10-13 17:44:38.653722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:30.162 [2024-10-13 17:44:38.653858] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:30.162 [2024-10-13 17:44:38.653869] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.162 [2024-10-13 17:44:38.653878] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:30.162 [2024-10-13 17:44:38.654034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.162 [2024-10-13 17:44:38.654144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:30.162 [2024-10-13 17:44:38.654373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.162 [2024-10-13 17:44:38.654373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:31.103 17:44:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:31.103 17:44:39 -- common/autotest_common.sh@852 -- # return 0 00:34:31.103 17:44:39 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:31.103 17:44:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.103 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:34:31.103 INFO: Log level set to 20 00:34:31.103 INFO: Requests: 00:34:31.103 { 00:34:31.103 "jsonrpc": "2.0", 00:34:31.103 "method": "nvmf_set_config", 00:34:31.103 "id": 1, 00:34:31.103 "params": { 00:34:31.103 "admin_cmd_passthru": { 00:34:31.103 "identify_ctrlr": true 00:34:31.103 } 00:34:31.103 } 00:34:31.103 } 00:34:31.103 00:34:31.103 INFO: response: 00:34:31.103 { 00:34:31.103 "jsonrpc": "2.0", 00:34:31.103 "id": 1, 00:34:31.103 "result": true 00:34:31.103 } 00:34:31.103 00:34:31.103 17:44:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.103 17:44:39 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:31.103 17:44:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.103 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:34:31.103 INFO: Setting log level to 20 00:34:31.103 INFO: Setting log level to 20 
00:34:31.103 INFO: Log level set to 20 00:34:31.103 INFO: Log level set to 20 00:34:31.103 INFO: Requests: 00:34:31.103 { 00:34:31.103 "jsonrpc": "2.0", 00:34:31.103 "method": "framework_start_init", 00:34:31.103 "id": 1 00:34:31.103 } 00:34:31.103 00:34:31.103 INFO: Requests: 00:34:31.103 { 00:34:31.103 "jsonrpc": "2.0", 00:34:31.104 "method": "framework_start_init", 00:34:31.104 "id": 1 00:34:31.104 } 00:34:31.104 00:34:31.104 [2024-10-13 17:44:39.399538] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:31.104 INFO: response: 00:34:31.104 { 00:34:31.104 "jsonrpc": "2.0", 00:34:31.104 "id": 1, 00:34:31.104 "result": true 00:34:31.104 } 00:34:31.104 00:34:31.104 INFO: response: 00:34:31.104 { 00:34:31.104 "jsonrpc": "2.0", 00:34:31.104 "id": 1, 00:34:31.104 "result": true 00:34:31.104 } 00:34:31.104 00:34:31.104 17:44:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.104 17:44:39 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:31.104 17:44:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.104 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:34:31.104 INFO: Setting log level to 40 00:34:31.104 INFO: Setting log level to 40 00:34:31.104 INFO: Setting log level to 40 00:34:31.104 [2024-10-13 17:44:39.412776] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.104 17:44:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.104 17:44:39 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:31.104 17:44:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:31.104 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:34:31.104 17:44:39 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:34:31.104 17:44:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.104 17:44:39 -- common/autotest_common.sh@10 -- # set +x 
00:34:31.364 Nvme0n1 00:34:31.364 17:44:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.364 17:44:39 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:31.364 17:44:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.364 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:34:31.364 17:44:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.364 17:44:39 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:31.364 17:44:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.364 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:34:31.364 17:44:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.364 17:44:39 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.364 17:44:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.364 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:34:31.364 [2024-10-13 17:44:39.796357] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.364 17:44:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.364 17:44:39 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:31.364 17:44:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.364 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:34:31.364 [2024-10-13 17:44:39.804113] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:31.364 [ 00:34:31.364 { 00:34:31.364 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:31.364 "subtype": "Discovery", 00:34:31.364 "listen_addresses": [], 00:34:31.364 "allow_any_host": true, 00:34:31.364 "hosts": [] 00:34:31.364 }, 00:34:31.364 { 
00:34:31.364 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:31.364 "subtype": "NVMe", 00:34:31.364 "listen_addresses": [ 00:34:31.364 { 00:34:31.364 "transport": "TCP", 00:34:31.364 "trtype": "TCP", 00:34:31.364 "adrfam": "IPv4", 00:34:31.364 "traddr": "10.0.0.2", 00:34:31.364 "trsvcid": "4420" 00:34:31.364 } 00:34:31.364 ], 00:34:31.364 "allow_any_host": true, 00:34:31.364 "hosts": [], 00:34:31.364 "serial_number": "SPDK00000000000001", 00:34:31.364 "model_number": "SPDK bdev Controller", 00:34:31.364 "max_namespaces": 1, 00:34:31.364 "min_cntlid": 1, 00:34:31.364 "max_cntlid": 65519, 00:34:31.364 "namespaces": [ 00:34:31.364 { 00:34:31.364 "nsid": 1, 00:34:31.364 "bdev_name": "Nvme0n1", 00:34:31.364 "name": "Nvme0n1", 00:34:31.364 "nguid": "3634473052605494002538450000002B", 00:34:31.364 "uuid": "36344730-5260-5494-0025-38450000002b" 00:34:31.364 } 00:34:31.364 ] 00:34:31.364 } 00:34:31.364 ] 00:34:31.364 17:44:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.364 17:44:39 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:31.364 17:44:39 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:31.364 17:44:39 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:31.364 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.625 17:44:39 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:34:31.625 17:44:39 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:31.625 17:44:39 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:31.625 17:44:39 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:31.625 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.625 
17:44:40 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:34:31.625 17:44:40 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:34:31.625 17:44:40 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:34:31.625 17:44:40 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:31.625 17:44:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.625 17:44:40 -- common/autotest_common.sh@10 -- # set +x 00:34:31.885 17:44:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.885 17:44:40 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:31.885 17:44:40 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:31.886 17:44:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:31.886 17:44:40 -- nvmf/common.sh@116 -- # sync 00:34:31.886 17:44:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:31.886 17:44:40 -- nvmf/common.sh@119 -- # set +e 00:34:31.886 17:44:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:31.886 17:44:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:31.886 rmmod nvme_tcp 00:34:31.886 rmmod nvme_fabrics 00:34:31.886 rmmod nvme_keyring 00:34:31.886 17:44:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:31.886 17:44:40 -- nvmf/common.sh@123 -- # set -e 00:34:31.886 17:44:40 -- nvmf/common.sh@124 -- # return 0 00:34:31.886 17:44:40 -- nvmf/common.sh@477 -- # '[' -n 3431289 ']' 00:34:31.886 17:44:40 -- nvmf/common.sh@478 -- # killprocess 3431289 00:34:31.886 17:44:40 -- common/autotest_common.sh@926 -- # '[' -z 3431289 ']' 00:34:31.886 17:44:40 -- common/autotest_common.sh@930 -- # kill -0 3431289 00:34:31.886 17:44:40 -- common/autotest_common.sh@931 -- # uname 00:34:31.886 17:44:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:31.886 17:44:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3431289 00:34:31.886 17:44:40 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:31.886 17:44:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:31.886 17:44:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3431289' 00:34:31.886 killing process with pid 3431289 00:34:31.886 17:44:40 -- common/autotest_common.sh@945 -- # kill 3431289 00:34:31.886 [2024-10-13 17:44:40.295182] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:31.886 17:44:40 -- common/autotest_common.sh@950 -- # wait 3431289 00:34:32.147 17:44:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:32.147 17:44:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:32.147 17:44:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:32.147 17:44:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:32.147 17:44:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:32.147 17:44:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.147 17:44:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:32.147 17:44:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.694 17:44:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:34.694 00:34:34.694 real 0m12.897s 00:34:34.694 user 0m9.871s 00:34:34.694 sys 0m6.434s 00:34:34.694 17:44:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:34.694 17:44:42 -- common/autotest_common.sh@10 -- # set +x 00:34:34.694 ************************************ 00:34:34.694 END TEST nvmf_identify_passthru 00:34:34.694 ************************************ 00:34:34.694 17:44:42 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:34.695 17:44:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:34.695 17:44:42 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:34:34.695 17:44:42 -- common/autotest_common.sh@10 -- # set +x 00:34:34.695 ************************************ 00:34:34.695 START TEST nvmf_dif 00:34:34.695 ************************************ 00:34:34.695 17:44:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:34.695 * Looking for test storage... 00:34:34.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:34.695 17:44:42 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.695 17:44:42 -- nvmf/common.sh@7 -- # uname -s 00:34:34.695 17:44:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.695 17:44:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.695 17:44:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.695 17:44:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.695 17:44:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.695 17:44:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.695 17:44:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.695 17:44:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.695 17:44:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.695 17:44:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.695 17:44:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:34.695 17:44:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:34.695 17:44:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.695 17:44:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.695 17:44:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.695 17:44:42 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.695 17:44:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.695 17:44:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.695 17:44:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.695 17:44:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.695 17:44:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.695 17:44:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.695 17:44:42 -- paths/export.sh@5 -- # export PATH 00:34:34.695 17:44:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.695 17:44:42 -- nvmf/common.sh@46 -- # : 0 00:34:34.695 17:44:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:34.695 17:44:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:34.695 17:44:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:34.695 17:44:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.695 17:44:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.695 17:44:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:34.695 17:44:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:34.695 17:44:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:34.695 17:44:42 -- target/dif.sh@15 -- # NULL_META=16 00:34:34.695 17:44:42 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:34.695 17:44:42 -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:34.695 17:44:42 -- target/dif.sh@15 -- # NULL_DIF=1 00:34:34.695 17:44:42 -- target/dif.sh@135 -- # nvmftestinit 00:34:34.695 17:44:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:34.695 17:44:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.695 17:44:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:34.695 17:44:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:34.695 17:44:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:34.695 17:44:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.695 17:44:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:34.695 17:44:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.695 17:44:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 
00:34:34.695 17:44:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:34.695 17:44:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:34.695 17:44:42 -- common/autotest_common.sh@10 -- # set +x 00:34:42.843 17:44:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:42.843 17:44:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:42.843 17:44:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:42.843 17:44:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:42.843 17:44:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:42.843 17:44:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:42.843 17:44:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:42.843 17:44:49 -- nvmf/common.sh@294 -- # net_devs=() 00:34:42.843 17:44:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:42.843 17:44:49 -- nvmf/common.sh@295 -- # e810=() 00:34:42.843 17:44:49 -- nvmf/common.sh@295 -- # local -ga e810 00:34:42.843 17:44:49 -- nvmf/common.sh@296 -- # x722=() 00:34:42.843 17:44:49 -- nvmf/common.sh@296 -- # local -ga x722 00:34:42.843 17:44:49 -- nvmf/common.sh@297 -- # mlx=() 00:34:42.843 17:44:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:42.843 17:44:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:42.843 17:44:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:42.843 17:44:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:42.843 17:44:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:42.843 17:44:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:42.843 17:44:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:42.843 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:42.843 17:44:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:42.843 17:44:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:42.843 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:42.843 17:44:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:42.843 17:44:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:42.843 17:44:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.843 17:44:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:42.843 17:44:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.843 17:44:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:42.843 Found net devices under 0000:31:00.0: cvl_0_0 00:34:42.843 17:44:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.843 17:44:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:42.843 17:44:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.843 17:44:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:42.843 17:44:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.843 17:44:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:42.843 Found net devices under 0000:31:00.1: cvl_0_1 00:34:42.843 17:44:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.843 17:44:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:42.843 17:44:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:42.843 17:44:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:42.843 17:44:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:42.843 17:44:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:42.843 17:44:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:42.843 17:44:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:42.843 17:44:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:42.843 17:44:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:42.843 17:44:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:42.843 17:44:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:34:42.843 17:44:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:42.843 17:44:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:42.843 17:44:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:42.843 17:44:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:42.843 17:44:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:42.843 17:44:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:42.843 17:44:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:42.843 17:44:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:42.843 17:44:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:42.843 17:44:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:42.843 17:44:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:42.843 17:44:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:42.843 17:44:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:42.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:42.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:34:42.843 00:34:42.843 --- 10.0.0.2 ping statistics --- 00:34:42.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.843 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:34:42.843 17:44:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:42.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:42.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:34:42.843 00:34:42.843 --- 10.0.0.1 ping statistics --- 00:34:42.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.843 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:34:42.844 17:44:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:42.844 17:44:50 -- nvmf/common.sh@410 -- # return 0 00:34:42.844 17:44:50 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:42.844 17:44:50 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:45.391 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:34:45.391 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:45.391 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:45.652 17:44:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:45.652 17:44:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 
00:34:45.652 17:44:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:45.652 17:44:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:45.652 17:44:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:45.652 17:44:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:45.652 17:44:54 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:45.652 17:44:54 -- target/dif.sh@137 -- # nvmfappstart 00:34:45.652 17:44:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:45.652 17:44:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:45.652 17:44:54 -- common/autotest_common.sh@10 -- # set +x 00:34:45.652 17:44:54 -- nvmf/common.sh@469 -- # nvmfpid=3437399 00:34:45.652 17:44:54 -- nvmf/common.sh@470 -- # waitforlisten 3437399 00:34:45.652 17:44:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:45.652 17:44:54 -- common/autotest_common.sh@819 -- # '[' -z 3437399 ']' 00:34:45.652 17:44:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.652 17:44:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:45.652 17:44:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.652 17:44:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:45.652 17:44:54 -- common/autotest_common.sh@10 -- # set +x 00:34:45.652 [2024-10-13 17:44:54.172864] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:34:45.652 [2024-10-13 17:44:54.172913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.913 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.913 [2024-10-13 17:44:54.242963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.913 [2024-10-13 17:44:54.275569] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:45.913 [2024-10-13 17:44:54.275693] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.913 [2024-10-13 17:44:54.275702] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:45.913 [2024-10-13 17:44:54.275710] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:45.913 [2024-10-13 17:44:54.275735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.485 17:44:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:46.485 17:44:54 -- common/autotest_common.sh@852 -- # return 0 00:34:46.485 17:44:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:46.485 17:44:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:46.485 17:44:54 -- common/autotest_common.sh@10 -- # set +x 00:34:46.485 17:44:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:46.485 17:44:54 -- target/dif.sh@139 -- # create_transport 00:34:46.485 17:44:54 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:46.485 17:44:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:46.485 17:44:54 -- common/autotest_common.sh@10 -- # set +x 00:34:46.485 [2024-10-13 17:44:54.988898] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
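Annotation: the `nvmftestinit` trace earlier in this log (the `ip netns` / `iptables` / `ping` sequence) amounts to the setup sketched below. This is a dry-run sketch only: `run` just echoes each command instead of executing it, because the real sequence needs root and the `cvl_0_0`/`cvl_0_1` ice devices shown above. Device names and addresses are copied from the log; the `run` helper is ours, not part of SPDK.

```shell
# Dry-run sketch of the namespace-based TCP loopback that nvmftestinit builds.
# run() only prints the command, so this is safe to execute anywhere.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk          # target-side network namespace (from the log)

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                    # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP stays in the host
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                 # host -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1             # namespace -> host
```

The target (`nvmf_tgt`) is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF` in the log), so initiator and target traffic crosses a real TCP path between the two ports.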
00:34:46.485 17:44:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:46.485 17:44:54 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:46.485 17:44:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:46.485 17:44:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:46.485 17:44:54 -- common/autotest_common.sh@10 -- # set +x 00:34:46.485 ************************************ 00:34:46.485 START TEST fio_dif_1_default 00:34:46.485 ************************************ 00:34:46.485 17:44:55 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:34:46.485 17:44:55 -- target/dif.sh@86 -- # create_subsystems 0 00:34:46.485 17:44:55 -- target/dif.sh@28 -- # local sub 00:34:46.485 17:44:55 -- target/dif.sh@30 -- # for sub in "$@" 00:34:46.485 17:44:55 -- target/dif.sh@31 -- # create_subsystem 0 00:34:46.485 17:44:55 -- target/dif.sh@18 -- # local sub_id=0 00:34:46.485 17:44:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:46.485 17:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:46.485 17:44:55 -- common/autotest_common.sh@10 -- # set +x 00:34:46.746 bdev_null0 00:34:46.746 17:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:46.746 17:44:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:46.746 17:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:46.746 17:44:55 -- common/autotest_common.sh@10 -- # set +x 00:34:46.746 17:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:46.746 17:44:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:46.746 17:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:46.746 17:44:55 -- common/autotest_common.sh@10 -- # set +x 00:34:46.746 17:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:46.746 17:44:55 -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:46.746 17:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:46.746 17:44:55 -- common/autotest_common.sh@10 -- # set +x 00:34:46.746 [2024-10-13 17:44:55.045197] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.746 17:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:46.746 17:44:55 -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:46.746 17:44:55 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:46.746 17:44:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:46.746 17:44:55 -- nvmf/common.sh@520 -- # config=() 00:34:46.746 17:44:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:46.746 17:44:55 -- nvmf/common.sh@520 -- # local subsystem config 00:34:46.746 17:44:55 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:46.746 17:44:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:46.746 17:44:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:46.746 { 00:34:46.746 "params": { 00:34:46.746 "name": "Nvme$subsystem", 00:34:46.746 "trtype": "$TEST_TRANSPORT", 00:34:46.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:46.746 "adrfam": "ipv4", 00:34:46.746 "trsvcid": "$NVMF_PORT", 00:34:46.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:46.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:46.746 "hdgst": ${hdgst:-false}, 00:34:46.746 "ddgst": ${ddgst:-false} 00:34:46.746 }, 00:34:46.746 "method": "bdev_nvme_attach_controller" 00:34:46.746 } 00:34:46.746 EOF 00:34:46.746 )") 00:34:46.746 17:44:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:46.746 17:44:55 -- target/dif.sh@82 -- # gen_fio_conf 00:34:46.746 17:44:55 -- common/autotest_common.sh@1318 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:46.746 17:44:55 -- target/dif.sh@54 -- # local file 00:34:46.746 17:44:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:46.746 17:44:55 -- target/dif.sh@56 -- # cat 00:34:46.746 17:44:55 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:46.746 17:44:55 -- common/autotest_common.sh@1320 -- # shift 00:34:46.746 17:44:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:46.746 17:44:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:46.746 17:44:55 -- nvmf/common.sh@542 -- # cat 00:34:46.746 17:44:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:46.746 17:44:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:46.746 17:44:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:46.746 17:44:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:46.746 17:44:55 -- target/dif.sh@72 -- # (( file <= files )) 00:34:46.746 17:44:55 -- nvmf/common.sh@544 -- # jq . 
00:34:46.746 17:44:55 -- nvmf/common.sh@545 -- # IFS=, 00:34:46.746 17:44:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:46.746 "params": { 00:34:46.746 "name": "Nvme0", 00:34:46.746 "trtype": "tcp", 00:34:46.746 "traddr": "10.0.0.2", 00:34:46.746 "adrfam": "ipv4", 00:34:46.746 "trsvcid": "4420", 00:34:46.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:46.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:46.746 "hdgst": false, 00:34:46.746 "ddgst": false 00:34:46.746 }, 00:34:46.746 "method": "bdev_nvme_attach_controller" 00:34:46.747 }' 00:34:46.747 17:44:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:46.747 17:44:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:46.747 17:44:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:46.747 17:44:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:46.747 17:44:55 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:46.747 17:44:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:46.747 17:44:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:46.747 17:44:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:46.747 17:44:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:46.747 17:44:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:47.007 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:47.007 fio-3.35 00:34:47.007 Starting 1 thread 00:34:47.007 EAL: No free 2048 kB hugepages reported on node 1 00:34:47.267 [2024-10-13 17:44:55.772734] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
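Annotation: the `printf '%s\n' '{ ... }'` above is the JSON that `gen_nvmf_target_json` hands to fio's `spdk_bdev` ioengine over `/dev/fd/62`. A minimal sketch of that per-subsystem document, with field names and values copied from the logged output (the `target_json` helper is ours, not an SPDK function):

```shell
# Emit the bdev_nvme_attach_controller config fio consumes, for one subsystem id.
# Values (traddr 10.0.0.2, trsvcid 4420, nqn patterns) are taken from the log.
target_json() {
    local sub=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme${sub}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${sub}",
    "hostnqn": "nqn.2016-06.io.spdk:host${sub}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

target_json 0
```

In the multi-subsystem test later in this log, the same shape is emitted once per subsystem (`Nvme0`, `Nvme1`) and joined with `,` before being wrapped by `jq`.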
00:34:47.267 [2024-10-13 17:44:55.772778] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:59.502 00:34:59.502 filename0: (groupid=0, jobs=1): err= 0: pid=3437932: Sun Oct 13 17:45:05 2024 00:34:59.502 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 00:34:59.502 slat (nsec): min=5339, max=57178, avg=6236.80, stdev=2233.01 00:34:59.502 clat (usec): min=40843, max=43319, avg=41025.07, stdev=248.92 00:34:59.502 lat (usec): min=40851, max=43354, avg=41031.31, stdev=249.64 00:34:59.502 clat percentiles (usec): 00:34:59.502 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:59.502 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:59.502 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:59.502 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:34:59.502 | 99.99th=[43254] 00:34:59.502 bw ( KiB/s): min= 383, max= 416, per=99.53%, avg=388.75, stdev=11.75, samples=20 00:34:59.502 iops : min= 95, max= 104, avg=97.15, stdev= 2.96, samples=20 00:34:59.502 lat (msec) : 50=100.00% 00:34:59.502 cpu : usr=94.37%, sys=5.38%, ctx=14, majf=0, minf=213 00:34:59.502 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.502 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.502 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:59.502 00:34:59.502 Run status group 0 (all jobs): 00:34:59.502 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10015-10015msec 00:34:59.502 17:45:06 -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:59.502 17:45:06 -- target/dif.sh@43 -- # local sub 00:34:59.502 17:45:06 -- target/dif.sh@45 -- # for sub in "$@" 00:34:59.502 17:45:06 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:34:59.502 17:45:06 -- target/dif.sh@36 -- # local sub_id=0 00:34:59.502 17:45:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:59.502 17:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 17:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.502 17:45:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:59.502 17:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 17:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.502 00:34:59.502 real 0m11.046s 00:34:59.502 user 0m27.033s 00:34:59.502 sys 0m0.853s 00:34:59.502 17:45:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 ************************************ 00:34:59.502 END TEST fio_dif_1_default 00:34:59.502 ************************************ 00:34:59.502 17:45:06 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:59.502 17:45:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:59.502 17:45:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 ************************************ 00:34:59.502 START TEST fio_dif_1_multi_subsystems 00:34:59.502 ************************************ 00:34:59.502 17:45:06 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:34:59.502 17:45:06 -- target/dif.sh@92 -- # local files=1 00:34:59.502 17:45:06 -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:59.502 17:45:06 -- target/dif.sh@28 -- # local sub 00:34:59.502 17:45:06 -- target/dif.sh@30 -- # for sub in "$@" 00:34:59.502 17:45:06 -- target/dif.sh@31 -- # create_subsystem 0 00:34:59.502 
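Annotation: the fio_dif_1_default summary above is internally consistent: 976 reads of 4 KiB over a 10015 ms run yield the reported ~390 KiB/s and ~97 IOPS. A quick arithmetic check on the logged numbers (integer shell arithmetic rounds down, so bandwidth lands at 389 where fio reports 390):

```shell
# Numbers copied from the fio_dif_1_default run summary in this log.
ios=976           # issued rwts: total=976
bs_kib=4          # 4096 B read block size
runtime_ms=10015  # run=10015-10015msec

bw_kib_s=$(( ios * bs_kib * 1000 / runtime_ms ))  # ~390 KiB/s; truncates to 389
iops=$(( ios * 1000 / runtime_ms ))               # ~97 IOPS

echo "${bw_kib_s} KiB/s, ${iops} IOPS"
```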
17:45:06 -- target/dif.sh@18 -- # local sub_id=0 00:34:59.502 17:45:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:59.502 17:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 bdev_null0 00:34:59.502 17:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.502 17:45:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:59.502 17:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 17:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.502 17:45:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:59.502 17:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 17:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.502 17:45:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:59.502 17:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 [2024-10-13 17:45:06.139326] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.502 17:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.502 17:45:06 -- target/dif.sh@30 -- # for sub in "$@" 00:34:59.502 17:45:06 -- target/dif.sh@31 -- # create_subsystem 1 00:34:59.502 17:45:06 -- target/dif.sh@18 -- # local sub_id=1 00:34:59.502 17:45:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:59.502 17:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.502 17:45:06 -- 
common/autotest_common.sh@10 -- # set +x 00:34:59.502 bdev_null1 00:34:59.502 17:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.502 17:45:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:59.502 17:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 17:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.502 17:45:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:59.502 17:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 17:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.502 17:45:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:59.502 17:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.502 17:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:59.502 17:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.502 17:45:06 -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:59.502 17:45:06 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:59.502 17:45:06 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:59.502 17:45:06 -- nvmf/common.sh@520 -- # config=() 00:34:59.502 17:45:06 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.502 17:45:06 -- nvmf/common.sh@520 -- # local subsystem config 00:34:59.502 17:45:06 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.502 17:45:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:59.502 17:45:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:59.502 { 
00:34:59.502 "params": { 00:34:59.502 "name": "Nvme$subsystem", 00:34:59.502 "trtype": "$TEST_TRANSPORT", 00:34:59.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.502 "adrfam": "ipv4", 00:34:59.502 "trsvcid": "$NVMF_PORT", 00:34:59.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.502 "hdgst": ${hdgst:-false}, 00:34:59.502 "ddgst": ${ddgst:-false} 00:34:59.502 }, 00:34:59.502 "method": "bdev_nvme_attach_controller" 00:34:59.502 } 00:34:59.502 EOF 00:34:59.502 )") 00:34:59.502 17:45:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:59.502 17:45:06 -- target/dif.sh@82 -- # gen_fio_conf 00:34:59.502 17:45:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:59.502 17:45:06 -- target/dif.sh@54 -- # local file 00:34:59.502 17:45:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:59.502 17:45:06 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.502 17:45:06 -- target/dif.sh@56 -- # cat 00:34:59.502 17:45:06 -- common/autotest_common.sh@1320 -- # shift 00:34:59.502 17:45:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:59.502 17:45:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.502 17:45:06 -- nvmf/common.sh@542 -- # cat 00:34:59.502 17:45:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.502 17:45:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:59.502 17:45:06 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:59.502 17:45:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:59.502 17:45:06 -- target/dif.sh@72 -- # (( file <= files )) 00:34:59.502 17:45:06 -- target/dif.sh@73 -- # cat 00:34:59.502 17:45:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:59.502 17:45:06 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:59.502 { 00:34:59.502 "params": { 00:34:59.502 "name": "Nvme$subsystem", 00:34:59.503 "trtype": "$TEST_TRANSPORT", 00:34:59.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.503 "adrfam": "ipv4", 00:34:59.503 "trsvcid": "$NVMF_PORT", 00:34:59.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.503 "hdgst": ${hdgst:-false}, 00:34:59.503 "ddgst": ${ddgst:-false} 00:34:59.503 }, 00:34:59.503 "method": "bdev_nvme_attach_controller" 00:34:59.503 } 00:34:59.503 EOF 00:34:59.503 )") 00:34:59.503 17:45:06 -- target/dif.sh@72 -- # (( file++ )) 00:34:59.503 17:45:06 -- target/dif.sh@72 -- # (( file <= files )) 00:34:59.503 17:45:06 -- nvmf/common.sh@542 -- # cat 00:34:59.503 17:45:06 -- nvmf/common.sh@544 -- # jq . 00:34:59.503 17:45:06 -- nvmf/common.sh@545 -- # IFS=, 00:34:59.503 17:45:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:59.503 "params": { 00:34:59.503 "name": "Nvme0", 00:34:59.503 "trtype": "tcp", 00:34:59.503 "traddr": "10.0.0.2", 00:34:59.503 "adrfam": "ipv4", 00:34:59.503 "trsvcid": "4420", 00:34:59.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:59.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:59.503 "hdgst": false, 00:34:59.503 "ddgst": false 00:34:59.503 }, 00:34:59.503 "method": "bdev_nvme_attach_controller" 00:34:59.503 },{ 00:34:59.503 "params": { 00:34:59.503 "name": "Nvme1", 00:34:59.503 "trtype": "tcp", 00:34:59.503 "traddr": "10.0.0.2", 00:34:59.503 "adrfam": "ipv4", 00:34:59.503 "trsvcid": "4420", 00:34:59.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:59.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:59.503 "hdgst": false, 00:34:59.503 "ddgst": false 00:34:59.503 }, 00:34:59.503 "method": "bdev_nvme_attach_controller" 00:34:59.503 }' 00:34:59.503 17:45:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:59.503 17:45:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:59.503 17:45:06 -- 
common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.503 17:45:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.503 17:45:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:59.503 17:45:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:59.503 17:45:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:59.503 17:45:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:59.503 17:45:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:59.503 17:45:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.503 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:59.503 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:59.503 fio-3.35 00:34:59.503 Starting 2 threads 00:34:59.503 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.503 [2024-10-13 17:45:07.167772] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:59.503 [2024-10-13 17:45:07.167814] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:09.515 00:35:09.515 filename0: (groupid=0, jobs=1): err= 0: pid=3440362: Sun Oct 13 17:45:17 2024 00:35:09.515 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:35:09.515 slat (nsec): min=5339, max=23031, avg=6413.69, stdev=1418.92 00:35:09.515 clat (usec): min=40838, max=42959, avg=41010.04, stdev=187.94 00:35:09.515 lat (usec): min=40844, max=42964, avg=41016.45, stdev=188.08 00:35:09.515 clat percentiles (usec): 00:35:09.515 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:09.515 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:09.515 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:09.515 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:09.515 | 99.99th=[42730] 00:35:09.515 bw ( KiB/s): min= 384, max= 416, per=49.45%, avg=388.80, stdev=11.72, samples=20 00:35:09.515 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:09.515 lat (msec) : 50=100.00% 00:35:09.515 cpu : usr=96.68%, sys=3.11%, ctx=14, majf=0, minf=181 00:35:09.515 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.515 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.515 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:09.515 filename1: (groupid=0, jobs=1): err= 0: pid=3440363: Sun Oct 13 17:45:17 2024 00:35:09.515 read: IOPS=98, BW=395KiB/s (404kB/s)(3952KiB/10012msec) 00:35:09.515 slat (nsec): min=5336, max=25769, avg=6353.07, stdev=1435.44 00:35:09.515 clat (usec): min=917, max=42995, avg=40516.19, stdev=5098.69 00:35:09.515 lat (usec): min=922, max=43001, avg=40522.54, stdev=5098.80 00:35:09.515 
clat percentiles (usec): 00:35:09.515 | 1.00th=[ 938], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:09.515 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:09.515 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:09.515 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:35:09.515 | 99.99th=[43254] 00:35:09.515 bw ( KiB/s): min= 384, max= 448, per=50.09%, avg=393.60, stdev=21.02, samples=20 00:35:09.515 iops : min= 96, max= 112, avg=98.40, stdev= 5.26, samples=20 00:35:09.515 lat (usec) : 1000=1.62% 00:35:09.515 lat (msec) : 50=98.38% 00:35:09.515 cpu : usr=96.23%, sys=3.56%, ctx=12, majf=0, minf=111 00:35:09.515 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.515 issued rwts: total=988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.515 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:09.515 00:35:09.515 Run status group 0 (all jobs): 00:35:09.515 READ: bw=785KiB/s (803kB/s), 390KiB/s-395KiB/s (399kB/s-404kB/s), io=7856KiB (8045kB), run=10011-10012msec 00:35:09.515 17:45:17 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:09.515 17:45:17 -- target/dif.sh@43 -- # local sub 00:35:09.515 17:45:17 -- target/dif.sh@45 -- # for sub in "$@" 00:35:09.515 17:45:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:09.515 17:45:17 -- target/dif.sh@36 -- # local sub_id=0 00:35:09.515 17:45:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:09.515 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.515 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:35:09.515 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.515 17:45:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:35:09.515 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.515 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:35:09.516 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.516 17:45:17 -- target/dif.sh@45 -- # for sub in "$@" 00:35:09.516 17:45:17 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:09.516 17:45:17 -- target/dif.sh@36 -- # local sub_id=1 00:35:09.516 17:45:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:09.516 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.516 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:35:09.516 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.516 17:45:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:09.516 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.516 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:35:09.516 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.516 00:35:09.516 real 0m11.354s 00:35:09.516 user 0m30.866s 00:35:09.516 sys 0m1.014s 00:35:09.516 17:45:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:09.516 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:35:09.516 ************************************ 00:35:09.516 END TEST fio_dif_1_multi_subsystems 00:35:09.516 ************************************ 00:35:09.516 17:45:17 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:09.516 17:45:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:09.516 17:45:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:09.516 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:35:09.516 ************************************ 00:35:09.516 START TEST fio_dif_rand_params 00:35:09.516 ************************************ 00:35:09.516 17:45:17 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:35:09.516 17:45:17 -- target/dif.sh@100 -- # 
local NULL_DIF 00:35:09.516 17:45:17 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:09.516 17:45:17 -- target/dif.sh@103 -- # NULL_DIF=3 00:35:09.516 17:45:17 -- target/dif.sh@103 -- # bs=128k 00:35:09.516 17:45:17 -- target/dif.sh@103 -- # numjobs=3 00:35:09.516 17:45:17 -- target/dif.sh@103 -- # iodepth=3 00:35:09.516 17:45:17 -- target/dif.sh@103 -- # runtime=5 00:35:09.516 17:45:17 -- target/dif.sh@105 -- # create_subsystems 0 00:35:09.516 17:45:17 -- target/dif.sh@28 -- # local sub 00:35:09.516 17:45:17 -- target/dif.sh@30 -- # for sub in "$@" 00:35:09.516 17:45:17 -- target/dif.sh@31 -- # create_subsystem 0 00:35:09.516 17:45:17 -- target/dif.sh@18 -- # local sub_id=0 00:35:09.516 17:45:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:09.516 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.516 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:35:09.516 bdev_null0 00:35:09.516 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.516 17:45:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:09.516 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.516 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:35:09.516 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.516 17:45:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:09.516 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.516 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:35:09.516 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.516 17:45:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:09.516 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.516 17:45:17 -- 
common/autotest_common.sh@10 -- # set +x 00:35:09.516 [2024-10-13 17:45:17.540446] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:09.516 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.516 17:45:17 -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:09.516 17:45:17 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:09.516 17:45:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:09.516 17:45:17 -- nvmf/common.sh@520 -- # config=() 00:35:09.516 17:45:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:09.516 17:45:17 -- nvmf/common.sh@520 -- # local subsystem config 00:35:09.516 17:45:17 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:09.516 17:45:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:09.516 17:45:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:09.516 { 00:35:09.516 "params": { 00:35:09.516 "name": "Nvme$subsystem", 00:35:09.516 "trtype": "$TEST_TRANSPORT", 00:35:09.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:09.516 "adrfam": "ipv4", 00:35:09.516 "trsvcid": "$NVMF_PORT", 00:35:09.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:09.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:09.516 "hdgst": ${hdgst:-false}, 00:35:09.516 "ddgst": ${ddgst:-false} 00:35:09.516 }, 00:35:09.516 "method": "bdev_nvme_attach_controller" 00:35:09.516 } 00:35:09.516 EOF 00:35:09.516 )") 00:35:09.516 17:45:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:09.516 17:45:17 -- target/dif.sh@82 -- # gen_fio_conf 00:35:09.516 17:45:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:09.516 17:45:17 -- target/dif.sh@54 -- # local file 00:35:09.516 17:45:17 -- common/autotest_common.sh@1318 -- # local sanitizers 
00:35:09.516 17:45:17 -- target/dif.sh@56 -- # cat 00:35:09.516 17:45:17 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:09.516 17:45:17 -- common/autotest_common.sh@1320 -- # shift 00:35:09.516 17:45:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:09.517 17:45:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:09.517 17:45:17 -- nvmf/common.sh@542 -- # cat 00:35:09.517 17:45:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:09.517 17:45:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:09.517 17:45:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:09.517 17:45:17 -- target/dif.sh@72 -- # (( file <= files )) 00:35:09.517 17:45:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:09.517 17:45:17 -- nvmf/common.sh@544 -- # jq . 00:35:09.517 17:45:17 -- nvmf/common.sh@545 -- # IFS=, 00:35:09.517 17:45:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:09.517 "params": { 00:35:09.517 "name": "Nvme0", 00:35:09.517 "trtype": "tcp", 00:35:09.517 "traddr": "10.0.0.2", 00:35:09.517 "adrfam": "ipv4", 00:35:09.517 "trsvcid": "4420", 00:35:09.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:09.517 "hdgst": false, 00:35:09.517 "ddgst": false 00:35:09.517 }, 00:35:09.517 "method": "bdev_nvme_attach_controller" 00:35:09.517 }' 00:35:09.517 17:45:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:09.517 17:45:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:09.517 17:45:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:09.517 17:45:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:09.517 17:45:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:35:09.517 17:45:17 -- 
common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:09.517 17:45:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:09.517 17:45:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:09.517 17:45:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:09.517 17:45:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:09.517 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:09.517 ... 00:35:09.517 fio-3.35 00:35:09.517 Starting 3 threads 00:35:09.517 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.088 [2024-10-13 17:45:18.320721] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:35:10.088 [2024-10-13 17:45:18.320770] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:15.379 00:35:15.379 filename0: (groupid=0, jobs=1): err= 0: pid=3443190: Sun Oct 13 17:45:23 2024 00:35:15.379 read: IOPS=235, BW=29.4MiB/s (30.9MB/s)(148MiB/5043msec) 00:35:15.379 slat (nsec): min=5353, max=33587, avg=6572.65, stdev=1511.35 00:35:15.379 clat (usec): min=4956, max=55689, avg=12702.85, stdev=9689.81 00:35:15.379 lat (usec): min=4962, max=55695, avg=12709.43, stdev=9689.88 00:35:15.379 clat percentiles (usec): 00:35:15.379 | 1.00th=[ 5735], 5.00th=[ 7242], 10.00th=[ 7898], 20.00th=[ 8586], 00:35:15.379 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10683], 60.00th=[11207], 00:35:15.379 | 70.00th=[11600], 80.00th=[12256], 90.00th=[13304], 95.00th=[48497], 00:35:15.379 | 99.00th=[52167], 99.50th=[52691], 99.90th=[55313], 99.95th=[55837], 00:35:15.379 | 99.99th=[55837] 00:35:15.379 bw ( KiB/s): min=21504, max=38144, per=34.11%, avg=30336.00, stdev=4780.57, samples=10 00:35:15.379 iops : min= 168, max= 298, avg=237.00, stdev=37.35, 
samples=10 00:35:15.379 lat (msec) : 10=39.43%, 20=54.59%, 50=2.70%, 100=3.29% 00:35:15.379 cpu : usr=94.29%, sys=5.43%, ctx=16, majf=0, minf=104 00:35:15.379 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.379 issued rwts: total=1187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:15.379 filename0: (groupid=0, jobs=1): err= 0: pid=3443191: Sun Oct 13 17:45:23 2024 00:35:15.379 read: IOPS=187, BW=23.4MiB/s (24.6MB/s)(118MiB/5047msec) 00:35:15.379 slat (nsec): min=5384, max=37845, avg=7252.79, stdev=1962.37 00:35:15.379 clat (usec): min=5882, max=91452, avg=15948.97, stdev=14803.28 00:35:15.379 lat (usec): min=5888, max=91458, avg=15956.22, stdev=14803.37 00:35:15.379 clat percentiles (usec): 00:35:15.379 | 1.00th=[ 6128], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 8848], 00:35:15.379 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:35:15.379 | 70.00th=[11469], 80.00th=[12125], 90.00th=[50070], 95.00th=[51643], 00:35:15.379 | 99.00th=[53216], 99.50th=[53216], 99.90th=[91751], 99.95th=[91751], 00:35:15.379 | 99.99th=[91751] 00:35:15.379 bw ( KiB/s): min=11520, max=39168, per=27.17%, avg=24166.40, stdev=7882.32, samples=10 00:35:15.379 iops : min= 90, max= 306, avg=188.80, stdev=61.58, samples=10 00:35:15.379 lat (msec) : 10=34.88%, 20=50.95%, 50=3.70%, 100=10.47% 00:35:15.379 cpu : usr=94.83%, sys=4.91%, ctx=9, majf=0, minf=139 00:35:15.379 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.379 issued rwts: total=946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.379 latency : 
target=0, window=0, percentile=100.00%, depth=3 00:35:15.379 filename0: (groupid=0, jobs=1): err= 0: pid=3443192: Sun Oct 13 17:45:23 2024 00:35:15.379 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(172MiB/5044msec) 00:35:15.379 slat (nsec): min=5396, max=30642, avg=7215.23, stdev=1600.15 00:35:15.379 clat (usec): min=5229, max=90450, avg=10973.12, stdev=6701.38 00:35:15.379 lat (usec): min=5235, max=90456, avg=10980.34, stdev=6701.32 00:35:15.379 clat percentiles (usec): 00:35:15.379 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6915], 20.00th=[ 7898], 00:35:15.379 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10945], 00:35:15.379 | 70.00th=[11731], 80.00th=[12780], 90.00th=[13960], 95.00th=[14877], 00:35:15.379 | 99.00th=[51119], 99.50th=[52691], 99.90th=[90702], 99.95th=[90702], 00:35:15.379 | 99.99th=[90702] 00:35:15.379 bw ( KiB/s): min=26624, max=41472, per=39.50%, avg=35129.70, stdev=4689.82, samples=10 00:35:15.379 iops : min= 208, max= 324, avg=274.40, stdev=36.67, samples=10 00:35:15.379 lat (msec) : 10=50.15%, 20=47.89%, 50=0.73%, 100=1.24% 00:35:15.379 cpu : usr=94.39%, sys=5.37%, ctx=11, majf=0, minf=105 00:35:15.379 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.379 issued rwts: total=1374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:15.379 00:35:15.379 Run status group 0 (all jobs): 00:35:15.379 READ: bw=86.9MiB/s (91.1MB/s), 23.4MiB/s-34.0MiB/s (24.6MB/s-35.7MB/s), io=438MiB (460MB), run=5043-5047msec 00:35:15.379 17:45:23 -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:15.379 17:45:23 -- target/dif.sh@43 -- # local sub 00:35:15.379 17:45:23 -- target/dif.sh@45 -- # for sub in "$@" 00:35:15.379 17:45:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:15.379 
17:45:23 -- target/dif.sh@36 -- # local sub_id=0 00:35:15.379 17:45:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:15.379 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.379 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.379 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.379 17:45:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:15.379 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.379 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.379 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.379 17:45:23 -- target/dif.sh@109 -- # NULL_DIF=2 00:35:15.379 17:45:23 -- target/dif.sh@109 -- # bs=4k 00:35:15.379 17:45:23 -- target/dif.sh@109 -- # numjobs=8 00:35:15.379 17:45:23 -- target/dif.sh@109 -- # iodepth=16 00:35:15.379 17:45:23 -- target/dif.sh@109 -- # runtime= 00:35:15.379 17:45:23 -- target/dif.sh@109 -- # files=2 00:35:15.379 17:45:23 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:15.379 17:45:23 -- target/dif.sh@28 -- # local sub 00:35:15.379 17:45:23 -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.379 17:45:23 -- target/dif.sh@31 -- # create_subsystem 0 00:35:15.379 17:45:23 -- target/dif.sh@18 -- # local sub_id=0 00:35:15.379 17:45:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:15.379 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.379 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.379 bdev_null0 00:35:15.379 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.379 17:45:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:15.379 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.379 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.379 17:45:23 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.379 17:45:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:15.379 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.379 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.379 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.379 17:45:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:15.379 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.379 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.379 [2024-10-13 17:45:23.687141] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.379 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.379 17:45:23 -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.380 17:45:23 -- target/dif.sh@31 -- # create_subsystem 1 00:35:15.380 17:45:23 -- target/dif.sh@18 -- # local sub_id=1 00:35:15.380 17:45:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:15.380 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.380 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 bdev_null1 00:35:15.380 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.380 17:45:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:15.380 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.380 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.380 17:45:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:15.380 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.380 17:45:23 -- 
common/autotest_common.sh@10 -- # set +x 00:35:15.380 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.380 17:45:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:15.380 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.380 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.380 17:45:23 -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.380 17:45:23 -- target/dif.sh@31 -- # create_subsystem 2 00:35:15.380 17:45:23 -- target/dif.sh@18 -- # local sub_id=2 00:35:15.380 17:45:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:15.380 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.380 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 bdev_null2 00:35:15.380 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.380 17:45:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:15.380 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.380 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.380 17:45:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:15.380 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.380 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.380 17:45:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:15.380 17:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:15.380 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 
17:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:15.380 17:45:23 -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:15.380 17:45:23 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:15.380 17:45:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:15.380 17:45:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.380 17:45:23 -- nvmf/common.sh@520 -- # config=() 00:35:15.380 17:45:23 -- nvmf/common.sh@520 -- # local subsystem config 00:35:15.380 17:45:23 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.380 17:45:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:15.380 17:45:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:15.380 17:45:23 -- target/dif.sh@82 -- # gen_fio_conf 00:35:15.380 17:45:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:15.380 { 00:35:15.380 "params": { 00:35:15.380 "name": "Nvme$subsystem", 00:35:15.380 "trtype": "$TEST_TRANSPORT", 00:35:15.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.380 "adrfam": "ipv4", 00:35:15.380 "trsvcid": "$NVMF_PORT", 00:35:15.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.380 "hdgst": ${hdgst:-false}, 00:35:15.380 "ddgst": ${ddgst:-false} 00:35:15.380 }, 00:35:15.380 "method": "bdev_nvme_attach_controller" 00:35:15.380 } 00:35:15.380 EOF 00:35:15.380 )") 00:35:15.380 17:45:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:15.380 17:45:23 -- target/dif.sh@54 -- # local file 00:35:15.380 17:45:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:15.380 17:45:23 -- target/dif.sh@56 -- # cat 00:35:15.380 17:45:23 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.380 17:45:23 -- common/autotest_common.sh@1320 -- # shift 00:35:15.380 17:45:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:15.380 17:45:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.380 17:45:23 -- nvmf/common.sh@542 -- # cat 00:35:15.380 17:45:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.380 17:45:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:15.380 17:45:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:15.380 17:45:23 -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.380 17:45:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:15.380 17:45:23 -- target/dif.sh@73 -- # cat 00:35:15.380 17:45:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:15.380 17:45:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:15.380 { 00:35:15.380 "params": { 00:35:15.380 "name": "Nvme$subsystem", 00:35:15.380 "trtype": "$TEST_TRANSPORT", 00:35:15.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.380 "adrfam": "ipv4", 00:35:15.380 "trsvcid": "$NVMF_PORT", 00:35:15.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.380 "hdgst": ${hdgst:-false}, 00:35:15.380 "ddgst": ${ddgst:-false} 00:35:15.380 }, 00:35:15.380 "method": "bdev_nvme_attach_controller" 00:35:15.380 } 00:35:15.380 EOF 00:35:15.380 )") 00:35:15.380 17:45:23 -- target/dif.sh@72 -- # (( file++ )) 00:35:15.380 17:45:23 -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.380 17:45:23 -- target/dif.sh@73 -- # cat 00:35:15.380 17:45:23 -- nvmf/common.sh@542 -- # cat 00:35:15.380 17:45:23 -- target/dif.sh@72 -- # (( file++ )) 00:35:15.380 17:45:23 -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.380 17:45:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:15.380 17:45:23 
-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:15.380 { 00:35:15.380 "params": { 00:35:15.380 "name": "Nvme$subsystem", 00:35:15.380 "trtype": "$TEST_TRANSPORT", 00:35:15.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.380 "adrfam": "ipv4", 00:35:15.380 "trsvcid": "$NVMF_PORT", 00:35:15.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.380 "hdgst": ${hdgst:-false}, 00:35:15.380 "ddgst": ${ddgst:-false} 00:35:15.380 }, 00:35:15.380 "method": "bdev_nvme_attach_controller" 00:35:15.380 } 00:35:15.380 EOF 00:35:15.380 )") 00:35:15.380 17:45:23 -- nvmf/common.sh@542 -- # cat 00:35:15.380 17:45:23 -- nvmf/common.sh@544 -- # jq . 00:35:15.380 17:45:23 -- nvmf/common.sh@545 -- # IFS=, 00:35:15.380 17:45:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:15.380 "params": { 00:35:15.380 "name": "Nvme0", 00:35:15.380 "trtype": "tcp", 00:35:15.380 "traddr": "10.0.0.2", 00:35:15.380 "adrfam": "ipv4", 00:35:15.380 "trsvcid": "4420", 00:35:15.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.380 "hdgst": false, 00:35:15.380 "ddgst": false 00:35:15.380 }, 00:35:15.380 "method": "bdev_nvme_attach_controller" 00:35:15.380 },{ 00:35:15.380 "params": { 00:35:15.380 "name": "Nvme1", 00:35:15.380 "trtype": "tcp", 00:35:15.380 "traddr": "10.0.0.2", 00:35:15.380 "adrfam": "ipv4", 00:35:15.380 "trsvcid": "4420", 00:35:15.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:15.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:15.380 "hdgst": false, 00:35:15.380 "ddgst": false 00:35:15.380 }, 00:35:15.380 "method": "bdev_nvme_attach_controller" 00:35:15.380 },{ 00:35:15.380 "params": { 00:35:15.380 "name": "Nvme2", 00:35:15.380 "trtype": "tcp", 00:35:15.380 "traddr": "10.0.0.2", 00:35:15.380 "adrfam": "ipv4", 00:35:15.380 "trsvcid": "4420", 00:35:15.380 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:15.380 "hostnqn": "nqn.2016-06.io.spdk:host2", 
00:35:15.380 "hdgst": false, 00:35:15.380 "ddgst": false 00:35:15.380 }, 00:35:15.380 "method": "bdev_nvme_attach_controller" 00:35:15.380 }' 00:35:15.380 17:45:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:15.380 17:45:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:15.381 17:45:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.381 17:45:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.381 17:45:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:35:15.381 17:45:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:15.381 17:45:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:15.381 17:45:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:15.381 17:45:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:15.381 17:45:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.968 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:15.968 ... 00:35:15.968 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:15.968 ... 00:35:15.968 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:15.968 ... 00:35:15.968 fio-3.35 00:35:15.968 Starting 24 threads 00:35:15.968 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.911 [2024-10-13 17:45:25.087982] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:35:16.911 [2024-10-13 17:45:25.088026] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:26.913 00:35:26.913 filename0: (groupid=0, jobs=1): err= 0: pid=3444558: Sun Oct 13 17:45:35 2024 00:35:26.913 read: IOPS=550, BW=2200KiB/s (2253kB/s)(21.5MiB/10007msec) 00:35:26.913 slat (nsec): min=5506, max=78087, avg=9584.16, stdev=7670.27 00:35:26.913 clat (usec): min=2075, max=33007, avg=29006.15, stdev=5170.58 00:35:26.913 lat (usec): min=2108, max=33013, avg=29015.73, stdev=5170.16 00:35:26.913 clat percentiles (usec): 00:35:26.913 | 1.00th=[ 2868], 5.00th=[18744], 10.00th=[21627], 20.00th=[30016], 00:35:26.913 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.913 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31589], 95.00th=[31851], 00:35:26.913 | 99.00th=[32375], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:35:26.913 | 99.99th=[32900] 00:35:26.913 bw ( KiB/s): min= 2048, max= 2944, per=4.43%, avg=2195.95, stdev=201.17, samples=19 00:35:26.913 iops : min= 512, max= 736, avg=548.95, stdev=50.27, samples=19 00:35:26.913 lat (msec) : 4=1.74%, 10=0.58%, 20=4.07%, 50=93.60% 00:35:26.913 cpu : usr=98.58%, sys=0.94%, ctx=74, majf=0, minf=9 00:35:26.913 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:26.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.913 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.913 issued rwts: total=5504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.913 filename0: (groupid=0, jobs=1): err= 0: pid=3444559: Sun Oct 13 17:45:35 2024 00:35:26.913 read: IOPS=516, BW=2066KiB/s (2115kB/s)(20.2MiB/10039msec) 00:35:26.913 slat (nsec): min=5533, max=92531, avg=13634.33, stdev=11428.78 00:35:26.913 clat (usec): min=22234, max=79405, avg=30879.28, stdev=2839.44 00:35:26.913 lat (usec): min=22240, max=79412, 
avg=30892.92, stdev=2838.69 00:35:26.913 clat percentiles (usec): 00:35:26.913 | 1.00th=[28967], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:35:26.913 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:35:26.913 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[31851], 00:35:26.913 | 99.00th=[32637], 99.50th=[33162], 99.90th=[79168], 99.95th=[79168], 00:35:26.913 | 99.99th=[79168] 00:35:26.913 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2067.15, stdev=62.13, samples=20 00:35:26.913 iops : min= 480, max= 544, avg=516.75, stdev=15.47, samples=20 00:35:26.913 lat (msec) : 50=99.69%, 100=0.31% 00:35:26.913 cpu : usr=98.98%, sys=0.73%, ctx=74, majf=0, minf=9 00:35:26.913 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:26.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.913 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.913 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.913 filename0: (groupid=0, jobs=1): err= 0: pid=3444561: Sun Oct 13 17:45:35 2024 00:35:26.913 read: IOPS=512, BW=2052KiB/s (2101kB/s)(20.2MiB/10060msec) 00:35:26.913 slat (nsec): min=5498, max=98715, avg=15642.41, stdev=12804.43 00:35:26.913 clat (usec): min=9100, max=99735, avg=31080.12, stdev=5716.85 00:35:26.913 lat (usec): min=9108, max=99741, avg=31095.76, stdev=5717.33 00:35:26.913 clat percentiles (msec): 00:35:26.913 | 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 31], 00:35:26.913 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:35:26.913 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 39], 00:35:26.913 | 99.00th=[ 55], 99.50th=[ 55], 99.90th=[ 101], 99.95th=[ 101], 00:35:26.913 | 99.99th=[ 101] 00:35:26.913 bw ( KiB/s): min= 1836, max= 2288, per=4.15%, avg=2057.40, stdev=93.56, samples=20 00:35:26.913 iops : min= 459, 
max= 572, avg=514.35, stdev=23.39, samples=20 00:35:26.913 lat (msec) : 10=0.14%, 20=2.07%, 50=96.05%, 100=1.74% 00:35:26.913 cpu : usr=99.00%, sys=0.73%, ctx=14, majf=0, minf=9 00:35:26.913 IO depths : 1=1.1%, 2=2.7%, 4=8.4%, 8=73.2%, 16=14.6%, 32=0.0%, >=64=0.0% 00:35:26.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.913 complete : 0=0.0%, 4=90.7%, 8=6.7%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.913 issued rwts: total=5160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.913 filename0: (groupid=0, jobs=1): err= 0: pid=3444562: Sun Oct 13 17:45:35 2024 00:35:26.913 read: IOPS=514, BW=2060KiB/s (2109kB/s)(20.2MiB/10035msec) 00:35:26.913 slat (nsec): min=5500, max=96516, avg=15821.62, stdev=12651.68 00:35:26.913 clat (msec): min=15, max=101, avg=30.96, stdev= 4.09 00:35:26.913 lat (msec): min=15, max=101, avg=30.98, stdev= 4.09 00:35:26.913 clat percentiles (msec): 00:35:26.913 | 1.00th=[ 23], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:35:26.913 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:35:26.913 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 32], 95.00th=[ 33], 00:35:26.913 | 99.00th=[ 43], 99.50th=[ 50], 99.90th=[ 102], 99.95th=[ 102], 00:35:26.913 | 99.99th=[ 102] 00:35:26.914 bw ( KiB/s): min= 1872, max= 2160, per=4.16%, avg=2062.75, stdev=66.96, samples=20 00:35:26.914 iops : min= 468, max= 540, avg=515.65, stdev=16.79, samples=20 00:35:26.914 lat (msec) : 20=0.62%, 50=99.03%, 100=0.23%, 250=0.12% 00:35:26.914 cpu : usr=97.81%, sys=1.39%, ctx=691, majf=0, minf=9 00:35:26.914 IO depths : 1=0.2%, 2=2.9%, 4=11.1%, 8=70.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:35:26.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 complete : 0=0.0%, 4=91.5%, 8=5.9%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.914 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:26.914 filename0: (groupid=0, jobs=1): err= 0: pid=3444563: Sun Oct 13 17:45:35 2024 00:35:26.914 read: IOPS=514, BW=2058KiB/s (2108kB/s)(20.2MiB/10043msec) 00:35:26.914 slat (nsec): min=5756, max=86938, avg=24328.45, stdev=14105.63 00:35:26.914 clat (usec): min=22249, max=94303, avg=30879.69, stdev=3618.42 00:35:26.914 lat (usec): min=22254, max=94327, avg=30904.01, stdev=3617.47 00:35:26.914 clat percentiles (usec): 00:35:26.914 | 1.00th=[28967], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:35:26.914 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.914 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31589], 95.00th=[32113], 00:35:26.914 | 99.00th=[32900], 99.50th=[35390], 99.90th=[93848], 99.95th=[93848], 00:35:26.914 | 99.99th=[93848] 00:35:26.914 bw ( KiB/s): min= 1792, max= 2176, per=4.15%, avg=2060.55, stdev=81.65, samples=20 00:35:26.914 iops : min= 448, max= 544, avg=515.10, stdev=20.36, samples=20 00:35:26.914 lat (msec) : 50=99.69%, 100=0.31% 00:35:26.914 cpu : usr=98.94%, sys=0.78%, ctx=21, majf=0, minf=9 00:35:26.914 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:26.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.914 filename0: (groupid=0, jobs=1): err= 0: pid=3444564: Sun Oct 13 17:45:35 2024 00:35:26.914 read: IOPS=513, BW=2054KiB/s (2104kB/s)(20.1MiB/10032msec) 00:35:26.914 slat (usec): min=5, max=105, avg=27.07, stdev=18.19 00:35:26.914 clat (usec): min=21412, max=94263, avg=30881.70, stdev=3866.50 00:35:26.914 lat (usec): min=21418, max=94275, avg=30908.76, stdev=3865.67 00:35:26.914 clat percentiles (usec): 00:35:26.914 | 1.00th=[28967], 5.00th=[29492], 10.00th=[30016], 
20.00th=[30016], 00:35:26.914 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:35:26.914 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31589], 95.00th=[32113], 00:35:26.914 | 99.00th=[32900], 99.50th=[55313], 99.90th=[93848], 99.95th=[93848], 00:35:26.914 | 99.99th=[93848] 00:35:26.914 bw ( KiB/s): min= 1792, max= 2176, per=4.14%, avg=2054.15, stdev=87.88, samples=20 00:35:26.914 iops : min= 448, max= 544, avg=513.50, stdev=21.97, samples=20 00:35:26.914 lat (msec) : 50=99.38%, 100=0.62% 00:35:26.914 cpu : usr=98.87%, sys=0.71%, ctx=53, majf=0, minf=9 00:35:26.914 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:26.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.914 filename0: (groupid=0, jobs=1): err= 0: pid=3444565: Sun Oct 13 17:45:35 2024 00:35:26.914 read: IOPS=515, BW=2060KiB/s (2110kB/s)(20.2MiB/10034msec) 00:35:26.914 slat (usec): min=5, max=118, avg=18.77, stdev=20.25 00:35:26.914 clat (usec): min=19917, max=79574, avg=30914.13, stdev=3080.00 00:35:26.914 lat (usec): min=19924, max=79586, avg=30932.90, stdev=3078.55 00:35:26.914 clat percentiles (usec): 00:35:26.914 | 1.00th=[28967], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:35:26.914 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.914 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31851], 95.00th=[32375], 00:35:26.914 | 99.00th=[33162], 99.50th=[50070], 99.90th=[79168], 99.95th=[79168], 00:35:26.914 | 99.99th=[79168] 00:35:26.914 bw ( KiB/s): min= 1916, max= 2176, per=4.15%, avg=2060.60, stdev=71.14, samples=20 00:35:26.914 iops : min= 479, max= 544, avg=515.15, stdev=17.79, samples=20 00:35:26.914 lat (msec) : 20=0.27%, 50=99.11%, 100=0.62% 
00:35:26.914 cpu : usr=98.85%, sys=0.82%, ctx=54, majf=0, minf=9 00:35:26.914 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:26.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.914 filename0: (groupid=0, jobs=1): err= 0: pid=3444566: Sun Oct 13 17:45:35 2024 00:35:26.914 read: IOPS=514, BW=2057KiB/s (2107kB/s)(20.2MiB/10067msec) 00:35:26.914 slat (usec): min=5, max=124, avg=15.23, stdev=12.59 00:35:26.914 clat (usec): min=10830, max=90771, avg=30914.01, stdev=6548.77 00:35:26.914 lat (usec): min=10840, max=90778, avg=30929.24, stdev=6550.08 00:35:26.914 clat percentiles (usec): 00:35:26.914 | 1.00th=[17433], 5.00th=[20317], 10.00th=[23200], 20.00th=[27132], 00:35:26.914 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.914 | 70.00th=[31327], 80.00th=[32375], 90.00th=[38536], 95.00th=[43779], 00:35:26.914 | 99.00th=[48497], 99.50th=[51119], 99.90th=[90702], 99.95th=[90702], 00:35:26.914 | 99.99th=[90702] 00:35:26.914 bw ( KiB/s): min= 1664, max= 2344, per=4.17%, avg=2066.95, stdev=150.69, samples=20 00:35:26.914 iops : min= 416, max= 586, avg=516.70, stdev=37.68, samples=20 00:35:26.914 lat (msec) : 20=4.60%, 50=94.63%, 100=0.77% 00:35:26.914 cpu : usr=98.90%, sys=0.82%, ctx=9, majf=0, minf=11 00:35:26.914 IO depths : 1=3.1%, 2=6.3%, 4=14.9%, 8=65.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:35:26.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 complete : 0=0.0%, 4=91.6%, 8=3.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 issued rwts: total=5178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.914 filename1: (groupid=0, jobs=1): err= 0: 
pid=3444567: Sun Oct 13 17:45:35 2024 00:35:26.914 read: IOPS=513, BW=2055KiB/s (2104kB/s)(20.2MiB/10060msec) 00:35:26.914 slat (nsec): min=5533, max=98417, avg=27269.16, stdev=17756.51 00:35:26.914 clat (usec): min=26895, max=94397, avg=30897.70, stdev=3809.29 00:35:26.914 lat (usec): min=26904, max=94404, avg=30924.97, stdev=3807.65 00:35:26.914 clat percentiles (usec): 00:35:26.914 | 1.00th=[28967], 5.00th=[29492], 10.00th=[30016], 20.00th=[30016], 00:35:26.914 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30802], 00:35:26.914 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31589], 95.00th=[32113], 00:35:26.914 | 99.00th=[32900], 99.50th=[52167], 99.90th=[93848], 99.95th=[94897], 00:35:26.914 | 99.99th=[94897] 00:35:26.914 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2060.80, stdev=70.72, samples=20 00:35:26.914 iops : min= 480, max= 544, avg=515.20, stdev=17.68, samples=20 00:35:26.914 lat (msec) : 50=99.38%, 100=0.62% 00:35:26.914 cpu : usr=99.16%, sys=0.56%, ctx=11, majf=0, minf=9 00:35:26.914 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:26.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.914 filename1: (groupid=0, jobs=1): err= 0: pid=3444568: Sun Oct 13 17:45:35 2024 00:35:26.914 read: IOPS=514, BW=2060KiB/s (2109kB/s)(20.2MiB/10037msec) 00:35:26.914 slat (nsec): min=5516, max=74611, avg=16886.26, stdev=11962.64 00:35:26.914 clat (usec): min=16228, max=86649, avg=30909.88, stdev=3192.91 00:35:26.914 lat (usec): min=16235, max=86686, avg=30926.77, stdev=3192.53 00:35:26.914 clat percentiles (usec): 00:35:26.914 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:35:26.914 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 
60.00th=[30802], 00:35:26.914 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[32113], 00:35:26.914 | 99.00th=[32637], 99.50th=[32900], 99.90th=[86508], 99.95th=[86508], 00:35:26.914 | 99.99th=[86508] 00:35:26.914 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2061.00, stdev=70.69, samples=20 00:35:26.914 iops : min= 480, max= 544, avg=515.25, stdev=17.67, samples=20 00:35:26.914 lat (msec) : 20=0.04%, 50=99.65%, 100=0.31% 00:35:26.914 cpu : usr=99.02%, sys=0.70%, ctx=14, majf=0, minf=9 00:35:26.914 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:26.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.914 filename1: (groupid=0, jobs=1): err= 0: pid=3444569: Sun Oct 13 17:45:35 2024 00:35:26.914 read: IOPS=514, BW=2060KiB/s (2109kB/s)(20.2MiB/10064msec) 00:35:26.914 slat (nsec): min=5496, max=82731, avg=17511.64, stdev=13113.29 00:35:26.914 clat (usec): min=11943, max=95699, avg=30925.48, stdev=6192.98 00:35:26.914 lat (usec): min=11952, max=95706, avg=30942.99, stdev=6194.19 00:35:26.914 clat percentiles (usec): 00:35:26.914 | 1.00th=[18744], 5.00th=[21103], 10.00th=[24773], 20.00th=[29230], 00:35:26.914 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.914 | 70.00th=[31065], 80.00th=[31851], 90.00th=[38011], 95.00th=[40633], 00:35:26.914 | 99.00th=[49546], 99.50th=[54264], 99.90th=[93848], 99.95th=[93848], 00:35:26.914 | 99.99th=[95945] 00:35:26.914 bw ( KiB/s): min= 1792, max= 2336, per=4.17%, avg=2066.60, stdev=126.75, samples=20 00:35:26.914 iops : min= 448, max= 584, avg=516.65, stdev=31.69, samples=20 00:35:26.914 lat (msec) : 20=2.28%, 50=96.80%, 100=0.93% 00:35:26.914 cpu : usr=98.98%, sys=0.74%, ctx=45, majf=0, minf=9 
00:35:26.914 IO depths : 1=3.9%, 2=7.9%, 4=17.5%, 8=61.4%, 16=9.4%, 32=0.0%, >=64=0.0% 00:35:26.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 complete : 0=0.0%, 4=92.2%, 8=2.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.914 issued rwts: total=5182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.914 filename1: (groupid=0, jobs=1): err= 0: pid=3444570: Sun Oct 13 17:45:35 2024 00:35:26.914 read: IOPS=514, BW=2059KiB/s (2108kB/s)(20.1MiB/10009msec) 00:35:26.914 slat (nsec): min=5508, max=59387, avg=12313.27, stdev=8410.63 00:35:26.914 clat (usec): min=21213, max=79720, avg=30960.19, stdev=3007.23 00:35:26.914 lat (usec): min=21240, max=79745, avg=30972.50, stdev=3007.37 00:35:26.914 clat percentiles (usec): 00:35:26.914 | 1.00th=[29230], 5.00th=[29230], 10.00th=[30016], 20.00th=[30278], 00:35:26.914 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:35:26.914 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[31851], 00:35:26.915 | 99.00th=[33162], 99.50th=[49546], 99.90th=[79168], 99.95th=[79168], 00:35:26.915 | 99.99th=[80217] 00:35:26.915 bw ( KiB/s): min= 1916, max= 2176, per=4.17%, avg=2067.74, stdev=64.80, samples=19 00:35:26.915 iops : min= 479, max= 544, avg=516.89, stdev=16.22, samples=19 00:35:26.915 lat (msec) : 50=99.69%, 100=0.31% 00:35:26.915 cpu : usr=98.88%, sys=0.68%, ctx=85, majf=0, minf=9 00:35:26.915 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:26.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.915 filename1: (groupid=0, jobs=1): err= 0: pid=3444571: Sun Oct 13 17:45:35 2024 00:35:26.915 read: IOPS=514, 
BW=2058KiB/s (2107kB/s)(20.2MiB/10045msec) 00:35:26.915 slat (nsec): min=5537, max=72013, avg=14629.28, stdev=10893.15 00:35:26.915 clat (usec): min=28924, max=86805, avg=30979.30, stdev=3210.64 00:35:26.915 lat (usec): min=28933, max=86812, avg=30993.93, stdev=3210.40 00:35:26.915 clat percentiles (usec): 00:35:26.915 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30278], 20.00th=[30278], 00:35:26.915 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.915 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[32113], 00:35:26.915 | 99.00th=[32900], 99.50th=[39584], 99.90th=[86508], 99.95th=[86508], 00:35:26.915 | 99.99th=[86508] 00:35:26.915 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2060.55, stdev=57.31, samples=20 00:35:26.915 iops : min= 480, max= 544, avg=515.10, stdev=14.34, samples=20 00:35:26.915 lat (msec) : 50=99.69%, 100=0.31% 00:35:26.915 cpu : usr=99.10%, sys=0.62%, ctx=13, majf=0, minf=9 00:35:26.915 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:26.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.915 filename1: (groupid=0, jobs=1): err= 0: pid=3444573: Sun Oct 13 17:45:35 2024 00:35:26.915 read: IOPS=516, BW=2065KiB/s (2115kB/s)(20.2MiB/10041msec) 00:35:26.915 slat (nsec): min=5527, max=99716, avg=20525.54, stdev=16743.61 00:35:26.915 clat (usec): min=20022, max=79463, avg=30826.50, stdev=2864.99 00:35:26.915 lat (usec): min=20050, max=79470, avg=30847.03, stdev=2863.44 00:35:26.915 clat percentiles (usec): 00:35:26.915 | 1.00th=[28967], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:35:26.915 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.915 | 70.00th=[31065], 80.00th=[31327], 
90.00th=[31589], 95.00th=[31851], 00:35:26.915 | 99.00th=[32900], 99.50th=[33162], 99.90th=[79168], 99.95th=[79168], 00:35:26.915 | 99.99th=[79168] 00:35:26.915 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2066.70, stdev=62.28, samples=20 00:35:26.915 iops : min= 480, max= 544, avg=516.60, stdev=15.52, samples=20 00:35:26.915 lat (msec) : 50=99.69%, 100=0.31% 00:35:26.915 cpu : usr=99.10%, sys=0.61%, ctx=12, majf=0, minf=9 00:35:26.915 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:26.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.915 filename1: (groupid=0, jobs=1): err= 0: pid=3444574: Sun Oct 13 17:45:35 2024 00:35:26.915 read: IOPS=549, BW=2197KiB/s (2250kB/s)(21.6MiB/10085msec) 00:35:26.915 slat (nsec): min=5506, max=94979, avg=15328.99, stdev=13334.60 00:35:26.915 clat (msec): min=2, max=102, avg=29.01, stdev= 7.96 00:35:26.915 lat (msec): min=2, max=102, avg=29.02, stdev= 7.97 00:35:26.915 clat percentiles (msec): 00:35:26.915 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 24], 00:35:26.915 | 30.00th=[ 27], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:35:26.915 | 70.00th=[ 31], 80.00th=[ 32], 90.00th=[ 36], 95.00th=[ 42], 00:35:26.915 | 99.00th=[ 51], 99.50th=[ 54], 99.90th=[ 103], 99.95th=[ 103], 00:35:26.915 | 99.99th=[ 103] 00:35:26.915 bw ( KiB/s): min= 1888, max= 2688, per=4.45%, avg=2209.35, stdev=191.35, samples=20 00:35:26.915 iops : min= 472, max= 672, avg=552.30, stdev=47.82, samples=20 00:35:26.915 lat (msec) : 4=1.16%, 10=0.58%, 20=5.09%, 50=92.17%, 100=0.72% 00:35:26.915 lat (msec) : 250=0.29% 00:35:26.915 cpu : usr=98.76%, sys=0.87%, ctx=83, majf=0, minf=9 00:35:26.915 IO depths : 1=2.5%, 2=5.2%, 4=13.6%, 8=68.1%, 16=10.6%, 32=0.0%, 
>=64=0.0% 00:35:26.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 issued rwts: total=5540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.915 filename1: (groupid=0, jobs=1): err= 0: pid=3444575: Sun Oct 13 17:45:35 2024 00:35:26.915 read: IOPS=513, BW=2055KiB/s (2105kB/s)(20.2MiB/10058msec) 00:35:26.915 slat (nsec): min=5152, max=98367, avg=27721.64, stdev=16922.63 00:35:26.915 clat (usec): min=20904, max=94179, avg=30876.76, stdev=3783.15 00:35:26.915 lat (usec): min=20914, max=94215, avg=30904.48, stdev=3782.01 00:35:26.915 clat percentiles (usec): 00:35:26.915 | 1.00th=[28967], 5.00th=[29492], 10.00th=[30016], 20.00th=[30016], 00:35:26.915 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30802], 00:35:26.915 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31589], 95.00th=[32113], 00:35:26.915 | 99.00th=[32900], 99.50th=[50594], 99.90th=[93848], 99.95th=[93848], 00:35:26.915 | 99.99th=[93848] 00:35:26.915 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2060.95, stdev=70.41, samples=20 00:35:26.915 iops : min= 480, max= 544, avg=515.20, stdev=17.68, samples=20 00:35:26.915 lat (msec) : 50=99.38%, 100=0.62% 00:35:26.915 cpu : usr=98.66%, sys=0.92%, ctx=40, majf=0, minf=9 00:35:26.915 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:26.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.915 filename2: (groupid=0, jobs=1): err= 0: pid=3444576: Sun Oct 13 17:45:35 2024 00:35:26.915 read: IOPS=519, BW=2079KiB/s (2129kB/s)(20.4MiB/10065msec) 00:35:26.915 slat (usec): min=5, 
max=144, avg=10.46, stdev= 9.19 00:35:26.915 clat (usec): min=16930, max=94297, avg=30690.87, stdev=3980.89 00:35:26.915 lat (usec): min=16935, max=94304, avg=30701.33, stdev=3980.55 00:35:26.915 clat percentiles (usec): 00:35:26.915 | 1.00th=[21365], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:35:26.915 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.915 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[31851], 00:35:26.915 | 99.00th=[32637], 99.50th=[32900], 99.90th=[93848], 99.95th=[93848], 00:35:26.915 | 99.99th=[93848] 00:35:26.915 bw ( KiB/s): min= 2048, max= 2304, per=4.21%, avg=2089.20, stdev=72.33, samples=20 00:35:26.915 iops : min= 512, max= 576, avg=522.30, stdev=18.08, samples=20 00:35:26.915 lat (msec) : 20=0.92%, 50=98.78%, 100=0.31% 00:35:26.915 cpu : usr=98.68%, sys=0.78%, ctx=115, majf=0, minf=9 00:35:26.915 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:26.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.915 filename2: (groupid=0, jobs=1): err= 0: pid=3444577: Sun Oct 13 17:45:35 2024 00:35:26.915 read: IOPS=516, BW=2065KiB/s (2115kB/s)(20.2MiB/10041msec) 00:35:26.915 slat (usec): min=5, max=105, avg=22.75, stdev=19.15 00:35:26.915 clat (usec): min=22358, max=79522, avg=30791.10, stdev=2859.02 00:35:26.915 lat (usec): min=22365, max=79529, avg=30813.85, stdev=2857.68 00:35:26.915 clat percentiles (usec): 00:35:26.915 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29754], 20.00th=[30016], 00:35:26.915 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.915 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[31851], 00:35:26.915 | 99.00th=[32900], 99.50th=[33162], 
99.90th=[79168], 99.95th=[79168], 00:35:26.915 | 99.99th=[79168] 00:35:26.915 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2066.70, stdev=62.28, samples=20 00:35:26.915 iops : min= 480, max= 544, avg=516.60, stdev=15.52, samples=20 00:35:26.915 lat (msec) : 50=99.69%, 100=0.31% 00:35:26.915 cpu : usr=99.05%, sys=0.56%, ctx=91, majf=0, minf=9 00:35:26.915 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:26.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.915 filename2: (groupid=0, jobs=1): err= 0: pid=3444578: Sun Oct 13 17:45:35 2024 00:35:26.915 read: IOPS=513, BW=2055KiB/s (2105kB/s)(20.2MiB/10058msec) 00:35:26.915 slat (nsec): min=4268, max=99170, avg=22869.08, stdev=14811.92 00:35:26.915 clat (usec): min=28504, max=94199, avg=30947.77, stdev=3754.31 00:35:26.915 lat (usec): min=28518, max=94223, avg=30970.64, stdev=3753.28 00:35:26.915 clat percentiles (usec): 00:35:26.915 | 1.00th=[28967], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:35:26.915 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.915 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31851], 95.00th=[32113], 00:35:26.915 | 99.00th=[32900], 99.50th=[50594], 99.90th=[93848], 99.95th=[93848], 00:35:26.915 | 99.99th=[93848] 00:35:26.915 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2060.95, stdev=70.41, samples=20 00:35:26.915 iops : min= 480, max= 544, avg=515.20, stdev=17.68, samples=20 00:35:26.915 lat (msec) : 50=99.38%, 100=0.62% 00:35:26.915 cpu : usr=98.24%, sys=1.04%, ctx=274, majf=0, minf=9 00:35:26.915 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:26.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:26.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.915 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.915 filename2: (groupid=0, jobs=1): err= 0: pid=3444579: Sun Oct 13 17:45:35 2024 00:35:26.915 read: IOPS=513, BW=2054KiB/s (2104kB/s)(20.1MiB/10035msec) 00:35:26.916 slat (nsec): min=5494, max=97400, avg=17729.72, stdev=17176.12 00:35:26.916 clat (msec): min=12, max=100, avg=31.02, stdev= 3.82 00:35:26.916 lat (msec): min=12, max=100, avg=31.04, stdev= 3.82 00:35:26.916 clat percentiles (msec): 00:35:26.916 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:35:26.916 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:35:26.916 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 32], 95.00th=[ 33], 00:35:26.916 | 99.00th=[ 41], 99.50th=[ 50], 99.90th=[ 102], 99.95th=[ 102], 00:35:26.916 | 99.99th=[ 102] 00:35:26.916 bw ( KiB/s): min= 1872, max= 2160, per=4.15%, avg=2056.55, stdev=65.20, samples=20 00:35:26.916 iops : min= 468, max= 540, avg=514.10, stdev=16.28, samples=20 00:35:26.916 lat (msec) : 20=0.10%, 50=99.59%, 100=0.19%, 250=0.12% 00:35:26.916 cpu : usr=98.56%, sys=0.86%, ctx=182, majf=0, minf=9 00:35:26.916 IO depths : 1=0.1%, 2=2.1%, 4=8.4%, 8=72.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:35:26.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.916 complete : 0=0.0%, 4=91.0%, 8=7.1%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.916 issued rwts: total=5154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.916 filename2: (groupid=0, jobs=1): err= 0: pid=3444580: Sun Oct 13 17:45:35 2024 00:35:26.916 read: IOPS=513, BW=2055KiB/s (2104kB/s)(20.2MiB/10061msec) 00:35:26.916 slat (usec): min=5, max=112, avg=23.71, stdev=18.09 00:35:26.916 clat (usec): min=28246, max=94047, avg=30947.53, stdev=3814.52 00:35:26.916 lat 
(usec): min=28254, max=94054, avg=30971.24, stdev=3812.55 00:35:26.916 clat percentiles (usec): 00:35:26.916 | 1.00th=[28967], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:35:26.916 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.916 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31851], 95.00th=[32113], 00:35:26.916 | 99.00th=[32900], 99.50th=[53740], 99.90th=[93848], 99.95th=[93848], 00:35:26.916 | 99.99th=[93848] 00:35:26.916 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2061.15, stdev=70.38, samples=20 00:35:26.916 iops : min= 480, max= 544, avg=515.25, stdev=17.67, samples=20 00:35:26.916 lat (msec) : 50=99.38%, 100=0.62% 00:35:26.916 cpu : usr=98.85%, sys=0.84%, ctx=40, majf=0, minf=9 00:35:26.916 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:26.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.916 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.916 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.916 filename2: (groupid=0, jobs=1): err= 0: pid=3444581: Sun Oct 13 17:45:35 2024 00:35:26.916 read: IOPS=536, BW=2146KiB/s (2198kB/s)(21.1MiB/10048msec) 00:35:26.916 slat (usec): min=5, max=101, avg=20.27, stdev=17.47 00:35:26.916 clat (usec): min=14682, max=95681, avg=29645.20, stdev=5735.78 00:35:26.916 lat (usec): min=14713, max=95687, avg=29665.47, stdev=5738.10 00:35:26.916 clat percentiles (usec): 00:35:26.916 | 1.00th=[18482], 5.00th=[20055], 10.00th=[22152], 20.00th=[27132], 00:35:26.916 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:26.916 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31851], 95.00th=[35914], 00:35:26.916 | 99.00th=[45876], 99.50th=[49021], 99.90th=[93848], 99.95th=[93848], 00:35:26.916 | 99.99th=[95945] 00:35:26.916 bw ( KiB/s): min= 1968, max= 2352, per=4.34%, avg=2150.40, 
stdev=98.69, samples=20 00:35:26.916 iops : min= 492, max= 588, avg=537.60, stdev=24.67, samples=20 00:35:26.916 lat (msec) : 20=4.08%, 50=95.55%, 100=0.37% 00:35:26.916 cpu : usr=98.90%, sys=0.81%, ctx=38, majf=0, minf=9 00:35:26.916 IO depths : 1=1.6%, 2=5.8%, 4=18.3%, 8=62.9%, 16=11.4%, 32=0.0%, >=64=0.0% 00:35:26.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.916 complete : 0=0.0%, 4=92.4%, 8=2.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.916 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.916 filename2: (groupid=0, jobs=1): err= 0: pid=3444583: Sun Oct 13 17:45:35 2024 00:35:26.916 read: IOPS=514, BW=2059KiB/s (2109kB/s)(20.2MiB/10038msec) 00:35:26.916 slat (nsec): min=5515, max=73441, avg=13165.20, stdev=9215.15 00:35:26.916 clat (usec): min=9548, max=86798, avg=30968.96, stdev=3260.95 00:35:26.916 lat (usec): min=9556, max=86807, avg=30982.13, stdev=3261.08 00:35:26.916 clat percentiles (usec): 00:35:26.916 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30278], 20.00th=[30278], 00:35:26.916 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:26.916 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[32113], 00:35:26.916 | 99.00th=[32900], 99.50th=[32900], 99.90th=[86508], 99.95th=[86508], 00:35:26.916 | 99.99th=[86508] 00:35:26.916 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2060.55, stdev=70.78, samples=20 00:35:26.916 iops : min= 480, max= 544, avg=515.10, stdev=17.70, samples=20 00:35:26.916 lat (msec) : 10=0.04%, 20=0.08%, 50=99.54%, 100=0.35% 00:35:26.916 cpu : usr=98.63%, sys=0.89%, ctx=100, majf=0, minf=9 00:35:26.916 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:26.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.916 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.916 issued 
rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.916 filename2: (groupid=0, jobs=1): err= 0: pid=3444584: Sun Oct 13 17:45:35 2024 00:35:26.916 read: IOPS=514, BW=2056KiB/s (2105kB/s)(20.1MiB/10035msec) 00:35:26.916 slat (nsec): min=5495, max=79875, avg=14964.42, stdev=11474.00 00:35:26.916 clat (usec): min=10233, max=94487, avg=31051.07, stdev=5672.39 00:35:26.916 lat (usec): min=10242, max=94499, avg=31066.03, stdev=5673.23 00:35:26.916 clat percentiles (usec): 00:35:26.916 | 1.00th=[12256], 5.00th=[24773], 10.00th=[29492], 20.00th=[30278], 00:35:26.916 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[31065], 00:35:26.916 | 70.00th=[31327], 80.00th=[31589], 90.00th=[32113], 95.00th=[38536], 00:35:26.916 | 99.00th=[49546], 99.50th=[55313], 99.90th=[94897], 99.95th=[94897], 00:35:26.916 | 99.99th=[94897] 00:35:26.916 bw ( KiB/s): min= 1840, max= 2219, per=4.15%, avg=2056.55, stdev=83.10, samples=20 00:35:26.916 iops : min= 460, max= 554, avg=514.10, stdev=20.70, samples=20 00:35:26.916 lat (msec) : 20=2.97%, 50=96.10%, 100=0.93% 00:35:26.916 cpu : usr=99.10%, sys=0.62%, ctx=27, majf=0, minf=9 00:35:26.916 IO depths : 1=0.1%, 2=1.0%, 4=5.9%, 8=76.1%, 16=16.8%, 32=0.0%, >=64=0.0% 00:35:26.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.916 complete : 0=0.0%, 4=90.9%, 8=6.7%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.916 issued rwts: total=5158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.916 00:35:26.916 Run status group 0 (all jobs): 00:35:26.916 READ: bw=48.4MiB/s (50.8MB/s), 2052KiB/s-2200KiB/s (2101kB/s-2253kB/s), io=488MiB (512MB), run=10007-10085msec 00:35:27.178 17:45:35 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:27.178 17:45:35 -- target/dif.sh@43 -- # local sub 00:35:27.178 17:45:35 -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.178 
17:45:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:27.178 17:45:35 -- target/dif.sh@36 -- # local sub_id=0 00:35:27.178 17:45:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.178 17:45:35 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:27.178 17:45:35 -- target/dif.sh@36 -- # local sub_id=1 00:35:27.178 17:45:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.178 17:45:35 -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:27.178 17:45:35 -- target/dif.sh@36 -- # local sub_id=2 00:35:27.178 17:45:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 17:45:35 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@115 -- # NULL_DIF=1 00:35:27.178 17:45:35 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:27.178 17:45:35 -- target/dif.sh@115 -- # numjobs=2 00:35:27.178 17:45:35 -- target/dif.sh@115 -- # iodepth=8 00:35:27.178 17:45:35 -- target/dif.sh@115 -- # runtime=5 00:35:27.178 17:45:35 -- target/dif.sh@115 -- # files=1 00:35:27.178 17:45:35 -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:27.178 17:45:35 -- target/dif.sh@28 -- # local sub 00:35:27.178 17:45:35 -- target/dif.sh@30 -- # for sub in "$@" 00:35:27.178 17:45:35 -- target/dif.sh@31 -- # create_subsystem 0 00:35:27.178 17:45:35 -- target/dif.sh@18 -- # local sub_id=0 00:35:27.178 17:45:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 bdev_null0 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 
17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 [2024-10-13 17:45:35.561899] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@30 -- # for sub in "$@" 00:35:27.178 17:45:35 -- target/dif.sh@31 -- # create_subsystem 1 00:35:27.178 17:45:35 -- target/dif.sh@18 -- # local sub_id=1 00:35:27.178 17:45:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 bdev_null1 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:27.178 17:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:35:27.178 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:27.178 17:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.178 17:45:35 -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:27.178 17:45:35 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:27.178 17:45:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:27.178 17:45:35 -- nvmf/common.sh@520 -- # config=() 00:35:27.178 17:45:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.178 17:45:35 -- nvmf/common.sh@520 -- # local subsystem config 00:35:27.178 17:45:35 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.178 17:45:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:27.178 17:45:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:27.178 { 00:35:27.178 "params": { 00:35:27.178 "name": "Nvme$subsystem", 00:35:27.178 "trtype": "$TEST_TRANSPORT", 00:35:27.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.178 "adrfam": "ipv4", 00:35:27.178 "trsvcid": "$NVMF_PORT", 00:35:27.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.178 "hdgst": ${hdgst:-false}, 00:35:27.178 "ddgst": ${ddgst:-false} 00:35:27.178 }, 00:35:27.178 "method": "bdev_nvme_attach_controller" 00:35:27.178 } 00:35:27.178 EOF 00:35:27.178 )") 00:35:27.178 17:45:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:27.178 17:45:35 -- target/dif.sh@82 -- # gen_fio_conf 00:35:27.178 17:45:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:27.178 17:45:35 -- target/dif.sh@54 -- # local file 00:35:27.178 17:45:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:27.178 17:45:35 -- target/dif.sh@56 -- # cat 00:35:27.178 17:45:35 -- common/autotest_common.sh@1319 -- # 
local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.178 17:45:35 -- common/autotest_common.sh@1320 -- # shift 00:35:27.178 17:45:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:27.178 17:45:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.178 17:45:35 -- nvmf/common.sh@542 -- # cat 00:35:27.178 17:45:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.178 17:45:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:27.178 17:45:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:27.179 17:45:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:27.179 17:45:35 -- target/dif.sh@72 -- # (( file <= files )) 00:35:27.179 17:45:35 -- target/dif.sh@73 -- # cat 00:35:27.179 17:45:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:27.179 17:45:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:27.179 { 00:35:27.179 "params": { 00:35:27.179 "name": "Nvme$subsystem", 00:35:27.179 "trtype": "$TEST_TRANSPORT", 00:35:27.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.179 "adrfam": "ipv4", 00:35:27.179 "trsvcid": "$NVMF_PORT", 00:35:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.179 "hdgst": ${hdgst:-false}, 00:35:27.179 "ddgst": ${ddgst:-false} 00:35:27.179 }, 00:35:27.179 "method": "bdev_nvme_attach_controller" 00:35:27.179 } 00:35:27.179 EOF 00:35:27.179 )") 00:35:27.179 17:45:35 -- target/dif.sh@72 -- # (( file++ )) 00:35:27.179 17:45:35 -- target/dif.sh@72 -- # (( file <= files )) 00:35:27.179 17:45:35 -- nvmf/common.sh@542 -- # cat 00:35:27.179 17:45:35 -- nvmf/common.sh@544 -- # jq . 
00:35:27.179 17:45:35 -- nvmf/common.sh@545 -- # IFS=, 00:35:27.179 17:45:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:27.179 "params": { 00:35:27.179 "name": "Nvme0", 00:35:27.179 "trtype": "tcp", 00:35:27.179 "traddr": "10.0.0.2", 00:35:27.179 "adrfam": "ipv4", 00:35:27.179 "trsvcid": "4420", 00:35:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:27.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:27.179 "hdgst": false, 00:35:27.179 "ddgst": false 00:35:27.179 }, 00:35:27.179 "method": "bdev_nvme_attach_controller" 00:35:27.179 },{ 00:35:27.179 "params": { 00:35:27.179 "name": "Nvme1", 00:35:27.179 "trtype": "tcp", 00:35:27.179 "traddr": "10.0.0.2", 00:35:27.179 "adrfam": "ipv4", 00:35:27.179 "trsvcid": "4420", 00:35:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:27.179 "hdgst": false, 00:35:27.179 "ddgst": false 00:35:27.179 }, 00:35:27.179 "method": "bdev_nvme_attach_controller" 00:35:27.179 }' 00:35:27.179 17:45:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:27.179 17:45:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:27.179 17:45:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.179 17:45:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.179 17:45:35 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:35:27.179 17:45:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:27.179 17:45:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:27.179 17:45:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:27.179 17:45:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:27.179 17:45:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.773 filename0: (g=0): rw=randread, bs=(R) 
8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:27.773 ... 00:35:27.773 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:27.773 ... 00:35:27.773 fio-3.35 00:35:27.773 Starting 4 threads 00:35:27.773 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.033 [2024-10-13 17:45:36.479783] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:35:28.033 [2024-10-13 17:45:36.479834] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:33.470 00:35:33.470 filename0: (groupid=0, jobs=1): err= 0: pid=3446990: Sun Oct 13 17:45:41 2024 00:35:33.470 read: IOPS=2244, BW=17.5MiB/s (18.4MB/s)(87.7MiB/5003msec) 00:35:33.470 slat (nsec): min=5334, max=54262, avg=6037.91, stdev=1749.68 00:35:33.470 clat (usec): min=1730, max=6255, avg=3548.68, stdev=486.99 00:35:33.470 lat (usec): min=1736, max=6261, avg=3554.72, stdev=487.06 00:35:33.470 clat percentiles (usec): 00:35:33.470 | 1.00th=[ 2671], 5.00th=[ 2900], 10.00th=[ 3032], 20.00th=[ 3228], 00:35:33.470 | 30.00th=[ 3392], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3589], 00:35:33.470 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3916], 95.00th=[ 4686], 00:35:33.470 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 5932], 99.95th=[ 6063], 00:35:33.470 | 99.99th=[ 6259] 00:35:33.470 bw ( KiB/s): min=17792, max=18272, per=25.58%, avg=17941.33, stdev=171.77, samples=9 00:35:33.470 iops : min= 2224, max= 2284, avg=2242.67, stdev=21.47, samples=9 00:35:33.470 lat (msec) : 2=0.01%, 4=91.40%, 10=8.60% 00:35:33.470 cpu : usr=97.14%, sys=2.64%, ctx=10, majf=0, minf=0 00:35:33.470 IO depths : 1=0.1%, 2=0.1%, 4=70.7%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.470 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:33.470 issued rwts: total=11227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:33.471 filename0: (groupid=0, jobs=1): err= 0: pid=3446991: Sun Oct 13 17:45:41 2024 00:35:33.471 read: IOPS=2123, BW=16.6MiB/s (17.4MB/s)(83.0MiB/5001msec) 00:35:33.471 slat (nsec): min=5335, max=24242, avg=6066.67, stdev=1715.73 00:35:33.471 clat (usec): min=695, max=6747, avg=3750.78, stdev=596.23 00:35:33.471 lat (usec): min=701, max=6771, avg=3756.85, stdev=596.18 00:35:33.471 clat percentiles (usec): 00:35:33.471 | 1.00th=[ 2999], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3425], 00:35:33.471 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:35:33.471 | 70.00th=[ 3654], 80.00th=[ 3851], 90.00th=[ 5014], 95.00th=[ 5211], 00:35:33.471 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 5997], 99.95th=[ 6063], 00:35:33.471 | 99.99th=[ 6718] 00:35:33.471 bw ( KiB/s): min=16528, max=17312, per=24.25%, avg=17006.22, stdev=298.52, samples=9 00:35:33.471 iops : min= 2066, max= 2164, avg=2125.78, stdev=37.32, samples=9 00:35:33.471 lat (usec) : 750=0.01% 00:35:33.471 lat (msec) : 2=0.10%, 4=87.24%, 10=12.64% 00:35:33.471 cpu : usr=97.18%, sys=2.60%, ctx=7, majf=0, minf=9 00:35:33.471 IO depths : 1=0.1%, 2=0.1%, 4=71.3%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.471 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.471 issued rwts: total=10621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:33.471 filename1: (groupid=0, jobs=1): err= 0: pid=3446992: Sun Oct 13 17:45:41 2024 00:35:33.471 read: IOPS=2234, BW=17.5MiB/s (18.3MB/s)(87.3MiB/5001msec) 00:35:33.471 slat (nsec): min=7775, max=52687, avg=8826.57, stdev=1956.48 00:35:33.471 clat (usec): min=2067, max=6584, avg=3560.51, stdev=280.48 00:35:33.471 lat (usec): min=2083, 
max=6619, avg=3569.34, stdev=280.58 00:35:33.471 clat percentiles (usec): 00:35:33.471 | 1.00th=[ 2868], 5.00th=[ 3163], 10.00th=[ 3294], 20.00th=[ 3392], 00:35:33.471 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3589], 60.00th=[ 3621], 00:35:33.471 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3916], 95.00th=[ 3982], 00:35:33.471 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 5473], 00:35:33.471 | 99.99th=[ 5538] 00:35:33.471 bw ( KiB/s): min=17715, max=18080, per=25.49%, avg=17879.44, stdev=112.62, samples=9 00:35:33.471 iops : min= 2214, max= 2260, avg=2234.89, stdev=14.15, samples=9 00:35:33.471 lat (msec) : 4=96.64%, 10=3.36% 00:35:33.471 cpu : usr=96.80%, sys=2.96%, ctx=9, majf=0, minf=0 00:35:33.471 IO depths : 1=0.1%, 2=0.1%, 4=65.1%, 8=34.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.471 complete : 0=0.0%, 4=98.0%, 8=2.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.471 issued rwts: total=11175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:33.471 filename1: (groupid=0, jobs=1): err= 0: pid=3446993: Sun Oct 13 17:45:41 2024 00:35:33.471 read: IOPS=2167, BW=16.9MiB/s (17.8MB/s)(84.7MiB/5001msec) 00:35:33.471 slat (nsec): min=5334, max=41011, avg=5803.88, stdev=1222.08 00:35:33.471 clat (usec): min=1409, max=6147, avg=3673.77, stdev=493.85 00:35:33.471 lat (usec): min=1415, max=6153, avg=3679.57, stdev=493.82 00:35:33.471 clat percentiles (usec): 00:35:33.471 | 1.00th=[ 2999], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3392], 00:35:33.471 | 30.00th=[ 3458], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:35:33.471 | 70.00th=[ 3621], 80.00th=[ 3687], 90.00th=[ 3982], 95.00th=[ 5080], 00:35:33.471 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 5997], 99.95th=[ 6128], 00:35:33.471 | 99.99th=[ 6128] 00:35:33.471 bw ( KiB/s): min=17120, max=17872, per=24.72%, avg=17340.44, stdev=237.02, samples=9 
00:35:33.471 iops : min= 2140, max= 2234, avg=2167.56, stdev=29.63, samples=9 00:35:33.471 lat (msec) : 2=0.07%, 4=90.85%, 10=9.08% 00:35:33.471 cpu : usr=96.54%, sys=3.22%, ctx=6, majf=0, minf=2 00:35:33.471 IO depths : 1=0.1%, 2=0.1%, 4=73.9%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.471 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.471 issued rwts: total=10841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:33.471 00:35:33.471 Run status group 0 (all jobs): 00:35:33.471 READ: bw=68.5MiB/s (71.8MB/s), 16.6MiB/s-17.5MiB/s (17.4MB/s-18.4MB/s), io=343MiB (359MB), run=5001-5003msec 00:35:33.471 17:45:41 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:33.471 17:45:41 -- target/dif.sh@43 -- # local sub 00:35:33.471 17:45:41 -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.471 17:45:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:33.471 17:45:41 -- target/dif.sh@36 -- # local sub_id=0 00:35:33.471 17:45:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:33.471 17:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:33.471 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:35:33.471 17:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:33.471 17:45:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:33.471 17:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:33.471 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:35:33.471 17:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:33.471 17:45:41 -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.471 17:45:41 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:33.471 17:45:41 -- target/dif.sh@36 -- # local sub_id=1 00:35:33.471 17:45:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:35:33.471 17:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:33.471 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:35:33.471 17:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:33.471 17:45:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:33.471 17:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:33.471 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:35:33.471 17:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:33.471 00:35:33.471 real 0m24.298s 00:35:33.471 user 5m20.088s 00:35:33.471 sys 0m4.401s 00:35:33.471 17:45:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:33.471 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:35:33.471 ************************************ 00:35:33.471 END TEST fio_dif_rand_params 00:35:33.471 ************************************ 00:35:33.471 17:45:41 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:33.471 17:45:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:33.471 17:45:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:33.471 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:35:33.471 ************************************ 00:35:33.471 START TEST fio_dif_digest 00:35:33.471 ************************************ 00:35:33.471 17:45:41 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:35:33.471 17:45:41 -- target/dif.sh@123 -- # local NULL_DIF 00:35:33.471 17:45:41 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:33.471 17:45:41 -- target/dif.sh@125 -- # local hdgst ddgst 00:35:33.471 17:45:41 -- target/dif.sh@127 -- # NULL_DIF=3 00:35:33.471 17:45:41 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:33.471 17:45:41 -- target/dif.sh@127 -- # numjobs=3 00:35:33.471 17:45:41 -- target/dif.sh@127 -- # iodepth=3 00:35:33.471 17:45:41 -- target/dif.sh@127 -- # runtime=10 00:35:33.471 17:45:41 -- target/dif.sh@128 -- 
# hdgst=true 00:35:33.471 17:45:41 -- target/dif.sh@128 -- # ddgst=true 00:35:33.471 17:45:41 -- target/dif.sh@130 -- # create_subsystems 0 00:35:33.471 17:45:41 -- target/dif.sh@28 -- # local sub 00:35:33.471 17:45:41 -- target/dif.sh@30 -- # for sub in "$@" 00:35:33.471 17:45:41 -- target/dif.sh@31 -- # create_subsystem 0 00:35:33.471 17:45:41 -- target/dif.sh@18 -- # local sub_id=0 00:35:33.471 17:45:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:33.471 17:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:33.471 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:35:33.471 bdev_null0 00:35:33.471 17:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:33.471 17:45:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:33.471 17:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:33.471 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:35:33.471 17:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:33.471 17:45:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:33.471 17:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:33.471 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:35:33.471 17:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:33.471 17:45:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:33.471 17:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:33.471 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:35:33.471 [2024-10-13 17:45:41.890895] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.471 17:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:33.471 17:45:41 -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:33.471 
17:45:41 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:33.471 17:45:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:33.471 17:45:41 -- nvmf/common.sh@520 -- # config=() 00:35:33.471 17:45:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.471 17:45:41 -- nvmf/common.sh@520 -- # local subsystem config 00:35:33.471 17:45:41 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.471 17:45:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:33.471 17:45:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:33.471 { 00:35:33.471 "params": { 00:35:33.471 "name": "Nvme$subsystem", 00:35:33.471 "trtype": "$TEST_TRANSPORT", 00:35:33.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.471 "adrfam": "ipv4", 00:35:33.471 "trsvcid": "$NVMF_PORT", 00:35:33.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.471 "hdgst": ${hdgst:-false}, 00:35:33.471 "ddgst": ${ddgst:-false} 00:35:33.471 }, 00:35:33.471 "method": "bdev_nvme_attach_controller" 00:35:33.471 } 00:35:33.471 EOF 00:35:33.471 )") 00:35:33.471 17:45:41 -- target/dif.sh@82 -- # gen_fio_conf 00:35:33.471 17:45:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:33.471 17:45:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:33.471 17:45:41 -- target/dif.sh@54 -- # local file 00:35:33.471 17:45:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:33.471 17:45:41 -- target/dif.sh@56 -- # cat 00:35:33.472 17:45:41 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.472 17:45:41 -- common/autotest_common.sh@1320 -- # shift 00:35:33.472 17:45:41 -- common/autotest_common.sh@1322 -- # local 
asan_lib= 00:35:33.472 17:45:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.472 17:45:41 -- nvmf/common.sh@542 -- # cat 00:35:33.472 17:45:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.472 17:45:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:33.472 17:45:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:33.472 17:45:41 -- target/dif.sh@72 -- # (( file <= files )) 00:35:33.472 17:45:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:33.472 17:45:41 -- nvmf/common.sh@544 -- # jq . 00:35:33.472 17:45:41 -- nvmf/common.sh@545 -- # IFS=, 00:35:33.472 17:45:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:33.472 "params": { 00:35:33.472 "name": "Nvme0", 00:35:33.472 "trtype": "tcp", 00:35:33.472 "traddr": "10.0.0.2", 00:35:33.472 "adrfam": "ipv4", 00:35:33.472 "trsvcid": "4420", 00:35:33.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:33.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:33.472 "hdgst": true, 00:35:33.472 "ddgst": true 00:35:33.472 }, 00:35:33.472 "method": "bdev_nvme_attach_controller" 00:35:33.472 }' 00:35:33.472 17:45:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:33.472 17:45:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:33.472 17:45:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.472 17:45:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.472 17:45:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:35:33.472 17:45:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:33.748 17:45:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:33.748 17:45:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:33.749 17:45:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:33.749 17:45:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.009 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:34.009 ... 00:35:34.009 fio-3.35 00:35:34.009 Starting 3 threads 00:35:34.009 EAL: No free 2048 kB hugepages reported on node 1 00:35:34.268 [2024-10-13 17:45:42.761321] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:35:34.268 [2024-10-13 17:45:42.761359] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:46.506 00:35:46.506 filename0: (groupid=0, jobs=1): err= 0: pid=3448219: Sun Oct 13 17:45:52 2024 00:35:46.506 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(291MiB/10050msec) 00:35:46.506 slat (nsec): min=5750, max=87432, avg=7252.66, stdev=2250.67 00:35:46.506 clat (usec): min=7856, max=53316, avg=12917.69, stdev=1571.89 00:35:46.506 lat (usec): min=7864, max=53322, avg=12924.94, stdev=1571.77 00:35:46.506 clat percentiles (usec): 00:35:46.506 | 1.00th=[ 9372], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:35:46.507 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:35:46.507 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14222], 95.00th=[14615], 00:35:46.507 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16909], 99.95th=[49546], 00:35:46.507 | 99.99th=[53216] 00:35:46.507 bw ( KiB/s): min=28672, max=32256, per=34.63%, avg=29785.60, stdev=748.78, samples=20 00:35:46.507 iops : min= 224, max= 252, avg=232.70, stdev= 5.85, samples=20 00:35:46.507 lat (msec) : 10=1.59%, 20=98.33%, 50=0.04%, 100=0.04% 00:35:46.507 cpu : usr=94.92%, sys=4.82%, ctx=30, majf=0, minf=144 00:35:46.507 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.507 issued rwts: total=2329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.507 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:46.507 filename0: (groupid=0, jobs=1): err= 0: pid=3448220: Sun Oct 13 17:45:52 2024 00:35:46.507 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(283MiB/10008msec) 00:35:46.507 slat (nsec): min=5731, max=35589, avg=7176.59, stdev=1498.54 00:35:46.507 clat (usec): min=8477, max=55807, avg=13275.13, stdev=1911.25 00:35:46.507 lat (usec): min=8486, max=55813, avg=13282.31, stdev=1911.26 00:35:46.507 clat percentiles (usec): 00:35:46.507 | 1.00th=[10290], 5.00th=[11338], 10.00th=[11863], 20.00th=[12387], 00:35:46.507 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:35:46.507 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15008], 00:35:46.507 | 99.00th=[15926], 99.50th=[16319], 99.90th=[55313], 99.95th=[55313], 00:35:46.507 | 99.99th=[55837] 00:35:46.507 bw ( KiB/s): min=26368, max=31232, per=33.60%, avg=28902.40, stdev=870.72, samples=20 00:35:46.507 iops : min= 206, max= 244, avg=225.80, stdev= 6.80, samples=20 00:35:46.507 lat (msec) : 10=0.71%, 20=99.12%, 50=0.04%, 100=0.13% 00:35:46.507 cpu : usr=94.59%, sys=5.15%, ctx=19, majf=0, minf=156 00:35:46.507 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.507 issued rwts: total=2260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.507 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:46.507 filename0: (groupid=0, jobs=1): err= 0: pid=3448221: Sun Oct 13 17:45:52 2024 00:35:46.507 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(271MiB/10048msec) 00:35:46.507 slat (nsec): min=5583, max=83949, avg=6771.80, stdev=2020.01 
00:35:46.507 clat (usec): min=8047, max=94650, avg=13900.31, stdev=3522.67 00:35:46.507 lat (usec): min=8054, max=94656, avg=13907.08, stdev=3522.66 00:35:46.507 clat percentiles (usec): 00:35:46.507 | 1.00th=[10945], 5.00th=[11863], 10.00th=[12387], 20.00th=[12780], 00:35:46.507 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:35:46.507 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15139], 95.00th=[15664], 00:35:46.507 | 99.00th=[16712], 99.50th=[21365], 99.90th=[54789], 99.95th=[92799], 00:35:46.507 | 99.99th=[94897] 00:35:46.507 bw ( KiB/s): min=21504, max=28928, per=32.18%, avg=27676.30, stdev=1575.22, samples=20 00:35:46.507 iops : min= 168, max= 226, avg=216.20, stdev=12.31, samples=20 00:35:46.507 lat (msec) : 10=0.60%, 20=98.84%, 50=0.18%, 100=0.37% 00:35:46.507 cpu : usr=94.84%, sys=4.91%, ctx=23, majf=0, minf=166 00:35:46.507 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.507 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.507 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:46.507 00:35:46.507 Run status group 0 (all jobs): 00:35:46.507 READ: bw=84.0MiB/s (88.1MB/s), 26.9MiB/s-29.0MiB/s (28.2MB/s-30.4MB/s), io=844MiB (885MB), run=10008-10050msec 00:35:46.507 17:45:53 -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:46.507 17:45:53 -- target/dif.sh@43 -- # local sub 00:35:46.507 17:45:53 -- target/dif.sh@45 -- # for sub in "$@" 00:35:46.507 17:45:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:46.507 17:45:53 -- target/dif.sh@36 -- # local sub_id=0 00:35:46.507 17:45:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:46.507 17:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:46.507 17:45:53 -- common/autotest_common.sh@10 -- # 
set +x 00:35:46.507 17:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:46.507 17:45:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:46.507 17:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:46.507 17:45:53 -- common/autotest_common.sh@10 -- # set +x 00:35:46.507 17:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:46.507 00:35:46.507 real 0m11.241s 00:35:46.507 user 0m42.915s 00:35:46.507 sys 0m1.823s 00:35:46.507 17:45:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:46.507 17:45:53 -- common/autotest_common.sh@10 -- # set +x 00:35:46.507 ************************************ 00:35:46.507 END TEST fio_dif_digest 00:35:46.507 ************************************ 00:35:46.507 17:45:53 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:46.507 17:45:53 -- target/dif.sh@147 -- # nvmftestfini 00:35:46.507 17:45:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:46.507 17:45:53 -- nvmf/common.sh@116 -- # sync 00:35:46.507 17:45:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:46.507 17:45:53 -- nvmf/common.sh@119 -- # set +e 00:35:46.507 17:45:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:46.507 17:45:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:46.507 rmmod nvme_tcp 00:35:46.507 rmmod nvme_fabrics 00:35:46.507 rmmod nvme_keyring 00:35:46.507 17:45:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:46.507 17:45:53 -- nvmf/common.sh@123 -- # set -e 00:35:46.507 17:45:53 -- nvmf/common.sh@124 -- # return 0 00:35:46.507 17:45:53 -- nvmf/common.sh@477 -- # '[' -n 3437399 ']' 00:35:46.507 17:45:53 -- nvmf/common.sh@478 -- # killprocess 3437399 00:35:46.507 17:45:53 -- common/autotest_common.sh@926 -- # '[' -z 3437399 ']' 00:35:46.507 17:45:53 -- common/autotest_common.sh@930 -- # kill -0 3437399 00:35:46.507 17:45:53 -- common/autotest_common.sh@931 -- # uname 00:35:46.507 17:45:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:35:46.507 17:45:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3437399 00:35:46.507 17:45:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:46.507 17:45:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:46.507 17:45:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3437399' 00:35:46.507 killing process with pid 3437399 00:35:46.507 17:45:53 -- common/autotest_common.sh@945 -- # kill 3437399 00:35:46.507 17:45:53 -- common/autotest_common.sh@950 -- # wait 3437399 00:35:46.507 17:45:53 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:46.507 17:45:53 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:48.421 Waiting for block devices as requested 00:35:48.421 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:48.682 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:48.682 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:48.682 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:48.942 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:48.942 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:48.942 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:49.203 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:49.203 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:49.464 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:49.464 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:49.464 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:49.464 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:49.724 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:49.724 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:49.724 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:49.724 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:49.985 17:45:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:49.985 17:45:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:49.985 17:45:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:35:50.245 17:45:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:50.245 17:45:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.245 17:45:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:50.245 17:45:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.157 17:46:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:52.157 00:35:52.157 real 1m17.925s 00:35:52.157 user 8m2.638s 00:35:52.157 sys 0m21.519s 00:35:52.157 17:46:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:52.157 17:46:00 -- common/autotest_common.sh@10 -- # set +x 00:35:52.157 ************************************ 00:35:52.157 END TEST nvmf_dif 00:35:52.157 ************************************ 00:35:52.157 17:46:00 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:52.157 17:46:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:52.157 17:46:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:52.157 17:46:00 -- common/autotest_common.sh@10 -- # set +x 00:35:52.157 ************************************ 00:35:52.157 START TEST nvmf_abort_qd_sizes 00:35:52.157 ************************************ 00:35:52.157 17:46:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:52.418 * Looking for test storage... 
00:35:52.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:52.418 17:46:00 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.418 17:46:00 -- nvmf/common.sh@7 -- # uname -s 00:35:52.418 17:46:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.418 17:46:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.418 17:46:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.418 17:46:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:52.418 17:46:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.418 17:46:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.418 17:46:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.418 17:46:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.418 17:46:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.418 17:46:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.418 17:46:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:52.418 17:46:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:52.418 17:46:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.418 17:46:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.418 17:46:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.418 17:46:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.418 17:46:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.418 17:46:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.418 17:46:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.418 17:46:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.418 17:46:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.418 17:46:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.418 17:46:00 -- paths/export.sh@5 -- # export PATH 00:35:52.418 17:46:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.418 17:46:00 -- nvmf/common.sh@46 -- # : 0 00:35:52.418 17:46:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:52.418 17:46:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:52.418 
17:46:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:52.418 17:46:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.418 17:46:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.418 17:46:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:52.418 17:46:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:52.418 17:46:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:52.418 17:46:00 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:35:52.418 17:46:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:52.418 17:46:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.418 17:46:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:52.418 17:46:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:52.418 17:46:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:52.418 17:46:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.418 17:46:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:52.418 17:46:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.418 17:46:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:35:52.418 17:46:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:35:52.418 17:46:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:35:52.418 17:46:00 -- common/autotest_common.sh@10 -- # set +x 00:36:00.606 17:46:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:36:00.606 17:46:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:36:00.606 17:46:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:36:00.606 17:46:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:36:00.606 17:46:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:36:00.606 17:46:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:36:00.606 17:46:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:36:00.606 17:46:07 -- nvmf/common.sh@294 -- # net_devs=() 00:36:00.606 17:46:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:36:00.606 
17:46:07 -- nvmf/common.sh@295 -- # e810=() 00:36:00.606 17:46:07 -- nvmf/common.sh@295 -- # local -ga e810 00:36:00.606 17:46:07 -- nvmf/common.sh@296 -- # x722=() 00:36:00.606 17:46:07 -- nvmf/common.sh@296 -- # local -ga x722 00:36:00.606 17:46:07 -- nvmf/common.sh@297 -- # mlx=() 00:36:00.606 17:46:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:36:00.606 17:46:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:00.606 17:46:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:00.606 17:46:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:00.606 17:46:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:00.606 17:46:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:00.606 17:46:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:00.606 17:46:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:00.607 17:46:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:00.607 17:46:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:00.607 17:46:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:00.607 17:46:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:00.607 17:46:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:36:00.607 17:46:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:36:00.607 17:46:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:36:00.607 17:46:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:36:00.607 17:46:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:00.607 Found 0000:31:00.0 (0x8086 - 0x159b) 
00:36:00.607 17:46:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:36:00.607 17:46:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:00.607 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:00.607 17:46:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:36:00.607 17:46:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:36:00.607 17:46:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.607 17:46:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:36:00.607 17:46:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.607 17:46:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:00.607 Found net devices under 0000:31:00.0: cvl_0_0 00:36:00.607 17:46:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.607 17:46:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:36:00.607 17:46:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.607 17:46:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:36:00.607 17:46:07 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.607 17:46:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:00.607 Found net devices under 0000:31:00.1: cvl_0_1 00:36:00.607 17:46:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.607 17:46:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:36:00.607 17:46:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:36:00.607 17:46:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:36:00.607 17:46:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:36:00.607 17:46:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:00.607 17:46:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:00.607 17:46:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:00.607 17:46:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:36:00.607 17:46:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:00.607 17:46:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:00.607 17:46:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:36:00.607 17:46:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:00.607 17:46:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:00.607 17:46:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:36:00.607 17:46:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:36:00.607 17:46:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:36:00.607 17:46:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:00.607 17:46:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:00.607 17:46:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:00.607 17:46:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:36:00.607 17:46:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:36:00.607 17:46:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:00.607 17:46:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:00.607 17:46:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:36:00.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:00.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:36:00.607 00:36:00.607 --- 10.0.0.2 ping statistics --- 00:36:00.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.607 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:36:00.607 17:46:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:00.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:00.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:36:00.607 00:36:00.607 --- 10.0.0.1 ping statistics --- 00:36:00.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.607 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:36:00.607 17:46:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:00.607 17:46:08 -- nvmf/common.sh@410 -- # return 0 00:36:00.607 17:46:08 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:36:00.607 17:46:08 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:03.155 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:03.155 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:03.155 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:03.155 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:03.155 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:03.155 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:03.155 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:03.417 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:03.417 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:03.417 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:03.417 
0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:03.417 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:03.417 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:03.417 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:03.417 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:03.417 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:03.417 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:03.679 17:46:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.679 17:46:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:36:03.679 17:46:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:36:03.679 17:46:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.679 17:46:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:36:03.679 17:46:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:36:03.941 17:46:12 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:36:03.941 17:46:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:36:03.941 17:46:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:03.941 17:46:12 -- common/autotest_common.sh@10 -- # set +x 00:36:03.941 17:46:12 -- nvmf/common.sh@469 -- # nvmfpid=3457903 00:36:03.941 17:46:12 -- nvmf/common.sh@470 -- # waitforlisten 3457903 00:36:03.941 17:46:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:03.941 17:46:12 -- common/autotest_common.sh@819 -- # '[' -z 3457903 ']' 00:36:03.941 17:46:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.941 17:46:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:03.941 17:46:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:03.941 17:46:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:03.941 17:46:12 -- common/autotest_common.sh@10 -- # set +x 00:36:03.941 [2024-10-13 17:46:12.260168] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:36:03.941 [2024-10-13 17:46:12.260212] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.941 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.941 [2024-10-13 17:46:12.326292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:03.941 [2024-10-13 17:46:12.357172] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:36:03.941 [2024-10-13 17:46:12.357301] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.941 [2024-10-13 17:46:12.357311] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.941 [2024-10-13 17:46:12.357320] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:03.941 [2024-10-13 17:46:12.357456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.941 [2024-10-13 17:46:12.357560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:03.941 [2024-10-13 17:46:12.357721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.941 [2024-10-13 17:46:12.357722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:04.884 17:46:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:04.884 17:46:13 -- common/autotest_common.sh@852 -- # return 0 00:36:04.884 17:46:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:36:04.884 17:46:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:04.884 17:46:13 -- common/autotest_common.sh@10 -- # set +x 00:36:04.884 17:46:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.884 17:46:13 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:04.884 17:46:13 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:36:04.885 17:46:13 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:36:04.885 17:46:13 -- scripts/common.sh@311 -- # local bdf bdfs 00:36:04.885 17:46:13 -- scripts/common.sh@312 -- # local nvmes 00:36:04.885 17:46:13 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:36:04.885 17:46:13 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:04.885 17:46:13 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:36:04.885 17:46:13 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:36:04.885 17:46:13 -- scripts/common.sh@322 -- # uname -s 00:36:04.885 17:46:13 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:36:04.885 17:46:13 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:36:04.885 17:46:13 -- scripts/common.sh@327 -- # (( 1 )) 00:36:04.885 17:46:13 -- 
scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:36:04.885 17:46:13 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:36:04.885 17:46:13 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:36:04.885 17:46:13 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:36:04.885 17:46:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:04.885 17:46:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:04.885 17:46:13 -- common/autotest_common.sh@10 -- # set +x 00:36:04.885 ************************************ 00:36:04.885 START TEST spdk_target_abort 00:36:04.885 ************************************ 00:36:04.885 17:46:13 -- common/autotest_common.sh@1104 -- # spdk_target 00:36:04.885 17:46:13 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:04.885 17:46:13 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:36:04.885 17:46:13 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:04.885 17:46:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:04.885 17:46:13 -- common/autotest_common.sh@10 -- # set +x 00:36:04.885 spdk_targetn1 00:36:04.885 17:46:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:04.885 17:46:13 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:04.885 17:46:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:04.885 17:46:13 -- common/autotest_common.sh@10 -- # set +x 00:36:05.146 [2024-10-13 17:46:13.412025] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.146 17:46:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:05.146 17:46:13 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:36:05.146 17:46:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:05.146 17:46:13 -- common/autotest_common.sh@10 -- # 
set +x 00:36:05.146 17:46:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:05.146 17:46:13 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:36:05.146 17:46:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:05.146 17:46:13 -- common/autotest_common.sh@10 -- # set +x 00:36:05.146 17:46:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:05.146 17:46:13 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:36:05.146 17:46:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:05.146 17:46:13 -- common/autotest_common.sh@10 -- # set +x 00:36:05.146 [2024-10-13 17:46:13.452332] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:05.146 17:46:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:05.146 17:46:13 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:36:05.146 17:46:13 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:05.146 17:46:13 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:05.146 17:46:13 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:05.146 17:46:13 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.147 17:46:13 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:05.147 17:46:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:36:05.147 EAL: No free 2048 kB hugepages reported on node 1 00:36:05.409 [2024-10-13 17:46:13.701983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:688 len:8 PRP1 0x2000078be000 PRP2 0x0 00:36:05.409 [2024-10-13 17:46:13.702006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:36:05.409 [2024-10-13 17:46:13.709540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:920 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:36:05.409 [2024-10-13 17:46:13.709555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0076 p:1 m:0 dnr:0 00:36:05.409 [2024-10-13 17:46:13.725545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1584 len:8 PRP1 0x2000078c0000 PRP2 
0x0 00:36:05.409 [2024-10-13 17:46:13.725561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00c9 p:1 m:0 dnr:0 00:36:05.409 [2024-10-13 17:46:13.733465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1880 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:36:05.409 [2024-10-13 17:46:13.733480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00ec p:1 m:0 dnr:0 00:36:05.409 [2024-10-13 17:46:13.733719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1896 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:36:05.409 [2024-10-13 17:46:13.733729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00ef p:1 m:0 dnr:0 00:36:05.409 [2024-10-13 17:46:13.757489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2744 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:36:05.409 [2024-10-13 17:46:13.757505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:05.409 [2024-10-13 17:46:13.774716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3464 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:36:05.409 [2024-10-13 17:46:13.774732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00b2 p:0 m:0 dnr:0 00:36:05.409 [2024-10-13 17:46:13.775302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3512 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:36:05.409 [2024-10-13 17:46:13.775313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b9 p:0 m:0 dnr:0 00:36:05.409 [2024-10-13 17:46:13.781518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:3648 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:36:05.409 [2024-10-13 17:46:13.781532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00cb p:0 m:0 dnr:0 00:36:08.711 Initializing NVMe Controllers 00:36:08.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:36:08.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:36:08.711 Initialization complete. Launching workers. 00:36:08.711 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 12814, failed: 9 00:36:08.711 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 3255, failed to submit 9568 00:36:08.711 success 749, unsuccess 2506, failed 0 00:36:08.711 17:46:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:08.711 17:46:16 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:36:08.711 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.711 [2024-10-13 17:46:16.897185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:656 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:36:08.711 [2024-10-13 17:46:16.897222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0062 p:1 m:0 dnr:0 00:36:08.711 [2024-10-13 17:46:16.903436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:824 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:36:08.711 [2024-10-13 17:46:16.903457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:36:08.711 [2024-10-13 17:46:16.970169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:4 cid:176 nsid:1 lba:2344 len:8 PRP1 0x200007c44000 PRP2 0x0 00:36:08.711 [2024-10-13 17:46:16.970194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:08.711 [2024-10-13 17:46:17.184306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:7256 len:8 PRP1 0x200007c48000 PRP2 0x0 00:36:08.711 [2024-10-13 17:46:17.184334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:008c p:0 m:0 dnr:0 00:36:12.009 Initializing NVMe Controllers 00:36:12.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:36:12.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:36:12.009 Initialization complete. Launching workers. 00:36:12.009 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8584, failed: 4 00:36:12.009 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1194, failed to submit 7394 00:36:12.009 success 361, unsuccess 833, failed 0 00:36:12.009 17:46:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:12.009 17:46:20 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:36:12.009 EAL: No free 2048 kB hugepages reported on node 1 00:36:15.308 Initializing NVMe Controllers 00:36:15.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:36:15.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:36:15.308 Initialization complete. Launching workers. 
00:36:15.308 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 43875, failed: 0 00:36:15.308 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2598, failed to submit 41277 00:36:15.308 success 602, unsuccess 1996, failed 0 00:36:15.309 17:46:23 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:36:15.309 17:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:15.309 17:46:23 -- common/autotest_common.sh@10 -- # set +x 00:36:15.309 17:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:15.309 17:46:23 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:15.309 17:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:15.309 17:46:23 -- common/autotest_common.sh@10 -- # set +x 00:36:16.693 17:46:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:16.693 17:46:25 -- target/abort_qd_sizes.sh@62 -- # killprocess 3457903 00:36:16.693 17:46:25 -- common/autotest_common.sh@926 -- # '[' -z 3457903 ']' 00:36:16.693 17:46:25 -- common/autotest_common.sh@930 -- # kill -0 3457903 00:36:16.693 17:46:25 -- common/autotest_common.sh@931 -- # uname 00:36:16.693 17:46:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:16.693 17:46:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3457903 00:36:16.693 17:46:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:16.693 17:46:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:16.693 17:46:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3457903' 00:36:16.693 killing process with pid 3457903 00:36:16.693 17:46:25 -- common/autotest_common.sh@945 -- # kill 3457903 00:36:16.693 17:46:25 -- common/autotest_common.sh@950 -- # wait 3457903 00:36:16.693 00:36:16.693 real 0m12.081s 00:36:16.693 user 0m49.313s 00:36:16.693 sys 0m1.748s 00:36:16.693 17:46:25 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:36:16.693 17:46:25 -- common/autotest_common.sh@10 -- # set +x 00:36:16.693 ************************************ 00:36:16.693 END TEST spdk_target_abort 00:36:16.693 ************************************ 00:36:16.954 17:46:25 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:36:16.954 17:46:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:16.954 17:46:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:16.954 17:46:25 -- common/autotest_common.sh@10 -- # set +x 00:36:16.954 ************************************ 00:36:16.954 START TEST kernel_target_abort 00:36:16.954 ************************************ 00:36:16.954 17:46:25 -- common/autotest_common.sh@1104 -- # kernel_target 00:36:16.954 17:46:25 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:36:16.954 17:46:25 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:36:16.954 17:46:25 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:36:16.954 17:46:25 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:36:16.954 17:46:25 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:36:16.954 17:46:25 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:36:16.954 17:46:25 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:16.954 17:46:25 -- nvmf/common.sh@627 -- # local block nvme 00:36:16.954 17:46:25 -- nvmf/common.sh@629 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:16.954 17:46:25 -- nvmf/common.sh@630 -- # modprobe nvmet 00:36:16.954 17:46:25 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:16.954 17:46:25 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:20.260 Waiting for block devices as requested 00:36:20.521 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:20.521 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:20.521 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:20.521 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:20.782 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:20.782 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:20.782 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:21.043 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:21.043 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:21.304 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:21.304 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:21.304 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:21.564 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:21.564 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:21.564 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:21.564 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:21.824 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:22.086 17:46:30 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:36:22.086 17:46:30 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:22.086 17:46:30 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:36:22.086 17:46:30 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:36:22.086 17:46:30 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:22.086 No valid GPT data, bailing 00:36:22.086 17:46:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:22.086 17:46:30 -- scripts/common.sh@393 -- # pt= 00:36:22.086 17:46:30 -- 
scripts/common.sh@394 -- # return 1 00:36:22.086 17:46:30 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:36:22.086 17:46:30 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:36:22.086 17:46:30 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:36:22.086 17:46:30 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:36:22.086 17:46:30 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:22.086 17:46:30 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:36:22.086 17:46:30 -- nvmf/common.sh@654 -- # echo 1 00:36:22.086 17:46:30 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:36:22.086 17:46:30 -- nvmf/common.sh@656 -- # echo 1 00:36:22.086 17:46:30 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:36:22.086 17:46:30 -- nvmf/common.sh@663 -- # echo tcp 00:36:22.086 17:46:30 -- nvmf/common.sh@664 -- # echo 4420 00:36:22.086 17:46:30 -- nvmf/common.sh@665 -- # echo ipv4 00:36:22.086 17:46:30 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:22.086 17:46:30 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:22.086 00:36:22.086 Discovery Log Number of Records 2, Generation counter 2 00:36:22.086 =====Discovery Log Entry 0====== 00:36:22.086 trtype: tcp 00:36:22.086 adrfam: ipv4 00:36:22.086 subtype: current discovery subsystem 00:36:22.086 treq: not specified, sq flow control disable supported 00:36:22.086 portid: 1 00:36:22.086 trsvcid: 4420 00:36:22.086 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:22.086 traddr: 10.0.0.1 00:36:22.086 eflags: none 00:36:22.086 sectype: none 00:36:22.086 =====Discovery Log Entry 1====== 00:36:22.086 trtype: tcp 00:36:22.086 adrfam: ipv4 00:36:22.086 subtype: nvme subsystem 00:36:22.086 treq: not specified, sq 
flow control disable supported 00:36:22.086 portid: 1 00:36:22.086 trsvcid: 4420 00:36:22.086 subnqn: kernel_target 00:36:22.086 traddr: 10.0.0.1 00:36:22.086 eflags: none 00:36:22.086 sectype: none 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:22.086 17:46:30 -- 
target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:22.086 17:46:30 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:22.347 EAL: No free 2048 kB hugepages reported on node 1 00:36:25.648 Initializing NVMe Controllers 00:36:25.648 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:25.648 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:25.648 Initialization complete. Launching workers. 00:36:25.648 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68692, failed: 0 00:36:25.648 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 68692, failed to submit 0 00:36:25.648 success 0, unsuccess 68692, failed 0 00:36:25.648 17:46:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:25.648 17:46:33 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:25.648 EAL: No free 2048 kB hugepages reported on node 1 00:36:28.947 Initializing NVMe Controllers 00:36:28.947 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:28.947 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:28.947 Initialization complete. Launching workers. 
00:36:28.947 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 110759, failed: 0 00:36:28.947 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27894, failed to submit 82865 00:36:28.947 success 0, unsuccess 27894, failed 0 00:36:28.947 17:46:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:28.947 17:46:36 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:28.947 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.491 Initializing NVMe Controllers 00:36:31.491 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:31.491 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:31.491 Initialization complete. Launching workers. 00:36:31.491 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 105953, failed: 0 00:36:31.491 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26502, failed to submit 79451 00:36:31.491 success 0, unsuccess 26502, failed 0 00:36:31.491 17:46:39 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:36:31.491 17:46:39 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:36:31.491 17:46:39 -- nvmf/common.sh@677 -- # echo 0 00:36:31.491 17:46:39 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:36:31.491 17:46:39 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:36:31.491 17:46:39 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:31.491 17:46:39 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:36:31.491 17:46:39 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:36:31.491 17:46:39 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 
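The kernel_target_abort teardown traced just above (nvmf/common.sh@675-686) undoes the configfs setup performed earlier at nvmf/common.sh@645-668. As a sketch of that lifecycle, under assumptions: the trace shows the `echo` values but not the configfs attribute files they are redirected into, so the attribute names below (`attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are taken from the upstream Linux nvmet configfs layout rather than this log, and `/dev/nvme0n1` / `10.0.0.1:4420` are the device and listener the log detected:

```shell
#!/usr/bin/env bash
# Hedged sketch of the configfs sequence traced in nvmf/common.sh:
# export /dev/nvme0n1 as NVMe/TCP subsystem "kernel_target" on
# 10.0.0.1:4420, then tear it down as clean_kernel_target does.
# Requires root and the kernel nvmet modules.
set -e
nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/kernel_target
port=$nvmet/ports/1

modprobe nvmet nvmet_tcp                  # load kernel target modules
mkdir "$sub" "$sub/namespaces/1" "$port"  # subsystem, namespace 1, port 1
echo 1            > "$sub/attr_allow_any_host"       # no host allow-list
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"  # backing block device
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"   # listener address / transport
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"          # expose subsystem on the port

# --- teardown, mirroring clean_kernel_target ---
echo 0 > "$sub/namespaces/1/enable"
rm -f "$port/subsystems/kernel_target"
rmdir "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_tcp nvmet
```

Between setup and teardown, `nvme discover -t tcp -a 10.0.0.1 -s 4420` would report the subsystem, matching the two-record discovery log shown earlier in this trace.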
00:36:31.491 00:36:31.491 real 0m14.686s 00:36:31.491 user 0m8.465s 00:36:31.491 sys 0m3.699s 00:36:31.491 17:46:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:31.491 17:46:39 -- common/autotest_common.sh@10 -- # set +x 00:36:31.491 ************************************ 00:36:31.491 END TEST kernel_target_abort 00:36:31.491 ************************************ 00:36:31.491 17:46:39 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:36:31.491 17:46:39 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:36:31.491 17:46:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:31.491 17:46:39 -- nvmf/common.sh@116 -- # sync 00:36:31.491 17:46:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:31.491 17:46:39 -- nvmf/common.sh@119 -- # set +e 00:36:31.491 17:46:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:31.491 17:46:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:31.491 rmmod nvme_tcp 00:36:31.491 rmmod nvme_fabrics 00:36:31.752 rmmod nvme_keyring 00:36:31.752 17:46:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:31.752 17:46:40 -- nvmf/common.sh@123 -- # set -e 00:36:31.752 17:46:40 -- nvmf/common.sh@124 -- # return 0 00:36:31.752 17:46:40 -- nvmf/common.sh@477 -- # '[' -n 3457903 ']' 00:36:31.752 17:46:40 -- nvmf/common.sh@478 -- # killprocess 3457903 00:36:31.752 17:46:40 -- common/autotest_common.sh@926 -- # '[' -z 3457903 ']' 00:36:31.752 17:46:40 -- common/autotest_common.sh@930 -- # kill -0 3457903 00:36:31.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3457903) - No such process 00:36:31.752 17:46:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3457903 is not found' 00:36:31.752 Process with pid 3457903 is not found 00:36:31.752 17:46:40 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:36:31.752 17:46:40 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:35.963 0000:80:01.6 
(8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:65:00.0 (144d a80a): Already using the nvme driver 00:36:35.963 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:36:35.963 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:36:35.963 17:46:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:35.963 17:46:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:35.963 17:46:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:35.963 17:46:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:35.963 17:46:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.963 17:46:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:35.963 17:46:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:37.947 17:46:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:36:37.947 00:36:37.947 real 0m45.796s 00:36:37.947 user 1m3.103s 00:36:37.947 sys 0m16.623s 00:36:37.947 17:46:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:36:37.947 17:46:46 -- common/autotest_common.sh@10 -- # set +x 00:36:37.947 ************************************ 00:36:37.947 END TEST nvmf_abort_qd_sizes 00:36:37.947 ************************************ 00:36:38.275 17:46:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:38.275 17:46:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:38.275 17:46:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:38.275 17:46:46 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:38.275 17:46:46 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:36:38.275 17:46:46 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:36:38.275 17:46:46 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:36:38.275 17:46:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:38.275 17:46:46 -- common/autotest_common.sh@10 -- # set +x 00:36:38.275 17:46:46 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:36:38.275 17:46:46 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:36:38.275 17:46:46 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:36:38.275 17:46:46 -- common/autotest_common.sh@10 -- # set +x 00:36:46.417 INFO: APP EXITING 00:36:46.417 INFO: killing all VMs 00:36:46.417 INFO: killing vhost app 00:36:46.417 INFO: EXIT DONE 00:36:48.965 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 
00:36:48.965 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:36:48.965 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:36:48.965 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:36:48.965 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:36:48.965 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:36:49.225 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:36:49.225 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:36:49.225 0000:65:00.0 (144d a80a): Already using the nvme driver 00:36:49.225 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:36:49.225 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:36:49.225 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:36:49.226 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:36:49.226 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:36:49.226 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:36:49.226 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:36:49.226 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:36:53.432 Cleaning 00:36:53.432 Removing: /var/run/dpdk/spdk0/config 00:36:53.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:53.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:53.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:53.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:53.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:53.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:53.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:53.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:53.432 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:53.432 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:53.432 Removing: /var/run/dpdk/spdk1/config 00:36:53.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 
00:36:53.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:53.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:53.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:53.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:53.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:53.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:53.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:53.432 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:53.432 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:53.432 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:53.432 Removing: /var/run/dpdk/spdk2/config 00:36:53.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:53.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:53.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:53.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:53.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:53.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:53.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:53.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:53.432 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:53.432 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:53.432 Removing: /var/run/dpdk/spdk3/config 00:36:53.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:53.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:53.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:53.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:53.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:53.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:53.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:53.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:53.432 Removing: 
/var/run/dpdk/spdk3/fbarray_memzone 00:36:53.432 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:53.432 Removing: /var/run/dpdk/spdk4/config 00:36:53.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:53.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:53.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:53.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:53.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:53.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:53.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:53.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:53.432 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:53.432 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:53.432 Removing: /dev/shm/bdev_svc_trace.1 00:36:53.432 Removing: /dev/shm/nvmf_trace.0 00:36:53.432 Removing: /dev/shm/spdk_tgt_trace.pid2972697 00:36:53.432 Removing: /var/run/dpdk/spdk0 00:36:53.432 Removing: /var/run/dpdk/spdk1 00:36:53.432 Removing: /var/run/dpdk/spdk2 00:36:53.432 Removing: /var/run/dpdk/spdk3 00:36:53.432 Removing: /var/run/dpdk/spdk4 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2971218 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2972697 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2973343 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2974393 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2975180 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2975567 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2975881 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2976157 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2976432 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2976787 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2977103 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2977288 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2978600 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2982028 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2982278 00:36:53.432 Removing: 
/var/run/dpdk/spdk_pid2982628 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2982959 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2983337 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2983430 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2983857 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2984055 00:36:53.432 Removing: /var/run/dpdk/spdk_pid2984418 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2984466 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2984799 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2984890 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2985426 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2985604 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2985994 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2986360 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2986388 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2986442 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2986778 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2987012 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2987164 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2987510 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2987846 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2988057 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2988218 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2988569 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2988899 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2989064 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2989270 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2989625 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2989961 00:36:53.433 Removing: /var/run/dpdk/spdk_pid2990083 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2990329 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2990681 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2990949 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2991094 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2991405 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2991752 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2991991 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2992145 
00:36:53.694 Removing: /var/run/dpdk/spdk_pid2992462 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2992812 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2993005 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2993189 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2993526 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2993875 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2994045 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2994251 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2994585 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2994932 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2995097 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2995306 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2995649 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2996006 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2996155 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2996372 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2996714 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2997064 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2997125 00:36:53.694 Removing: /var/run/dpdk/spdk_pid2997534 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3002061 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3104384 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3109732 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3121676 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3128146 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3132941 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3133627 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3140903 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3140905 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3141924 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3142950 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3143984 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3144744 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3144869 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3145096 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3145346 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3145423 00:36:53.694 Removing: 
/var/run/dpdk/spdk_pid3146771 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3147906 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3148938 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3149616 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3149624 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3149958 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3151285 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3152493 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3162639 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3162995 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3168135 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3175032 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3178078 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3190461 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3201986 00:36:53.694 Removing: /var/run/dpdk/spdk_pid3204105 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3205229 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3225815 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3230480 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3235856 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3237785 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3240152 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3240347 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3240515 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3240858 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3241283 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3243464 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3244504 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3245097 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3252465 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3258918 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3264828 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3310210 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3315030 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3322155 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3323619 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3325429 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3330483 
00:36:53.954 Removing: /var/run/dpdk/spdk_pid3335431 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3344768 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3344770 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3350354 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3350671 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3350749 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3351373 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3351378 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3352752 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3354776 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3356645 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3358517 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3360525 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3362546 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3369983 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3370541 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3371745 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3372939 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3379209 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3382480 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3389113 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3396282 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3403154 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3403847 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3404538 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3405232 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3406299 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3406991 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3407682 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3408376 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3413506 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3413845 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3421003 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3421376 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3423897 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3431466 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3431509 00:36:53.954 Removing: 
/var/run/dpdk/spdk_pid3437726 00:36:53.954 Removing: /var/run/dpdk/spdk_pid3440101 00:36:54.213 Removing: /var/run/dpdk/spdk_pid3442769 00:36:54.213 Removing: /var/run/dpdk/spdk_pid3444286 00:36:54.213 Removing: /var/run/dpdk/spdk_pid3446563 00:36:54.213 Removing: /var/run/dpdk/spdk_pid3448054 00:36:54.213 Removing: /var/run/dpdk/spdk_pid3458199 00:36:54.213 Removing: /var/run/dpdk/spdk_pid3458879 00:36:54.214 Removing: /var/run/dpdk/spdk_pid3459540 00:36:54.214 Removing: /var/run/dpdk/spdk_pid3462389 00:36:54.214 Removing: /var/run/dpdk/spdk_pid3462914 00:36:54.214 Removing: /var/run/dpdk/spdk_pid3463590 00:36:54.214 Clean 00:36:54.214 killing process with pid 2911817 00:37:04.207 killing process with pid 2911814 00:37:04.207 killing process with pid 2911816 00:37:04.468 killing process with pid 2911815 00:37:04.468 17:47:12 -- common/autotest_common.sh@1436 -- # return 0 00:37:04.468 17:47:12 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:37:04.468 17:47:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:04.468 17:47:12 -- common/autotest_common.sh@10 -- # set +x 00:37:04.468 17:47:12 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:37:04.468 17:47:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:04.468 17:47:12 -- common/autotest_common.sh@10 -- # set +x 00:37:04.728 17:47:13 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:04.728 17:47:13 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:04.728 17:47:13 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:04.728 17:47:13 -- spdk/autotest.sh@394 -- # hash lcov 00:37:04.728 17:47:13 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:04.728 17:47:13 -- spdk/autotest.sh@396 -- # hostname 00:37:04.728 17:47:13 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:04.728 geninfo: WARNING: invalid characters removed from testname! 00:37:31.306 17:47:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:31.306 17:47:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:32.245 17:47:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:33.626 17:47:42 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 
'*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:35.006 17:47:43 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:36.915 17:47:44 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:38.298 17:47:46 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:38.298 17:47:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:38.298 17:47:46 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:38.298 17:47:46 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.298 17:47:46 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.298 17:47:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.298 17:47:46 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.299 17:47:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.299 17:47:46 -- paths/export.sh@5 -- $ export PATH 00:37:38.299 17:47:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.299 17:47:46 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:38.299 17:47:46 -- common/autobuild_common.sh@440 -- $ date +%s 00:37:38.299 17:47:46 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1728834466.XXXXXX 00:37:38.299 17:47:46 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1728834466.LCWI0u 00:37:38.299 17:47:46 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:37:38.299 17:47:46 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:37:38.299 17:47:46 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:38.299 17:47:46 -- 
common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:38.299 17:47:46 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:38.299 17:47:46 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:38.299 17:47:46 -- common/autobuild_common.sh@456 -- $ get_config_params 00:37:38.299 17:47:46 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:37:38.299 17:47:46 -- common/autotest_common.sh@10 -- $ set +x 00:37:38.299 17:47:46 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:38.299 17:47:46 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:37:38.299 17:47:46 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:38.299 17:47:46 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:38.299 17:47:46 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:38.299 17:47:46 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:38.299 17:47:46 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:38.299 17:47:46 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:38.299 17:47:46 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:38.299 17:47:46 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:38.299 17:47:46 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:38.299 + [[ -n 2856783 ]] 00:37:38.299 + sudo kill 2856783 00:37:38.309 [Pipeline] } 00:37:38.325 [Pipeline] // stage 00:37:38.330 [Pipeline] } 00:37:38.345 [Pipeline] // timeout 00:37:38.351 [Pipeline] } 00:37:38.366 [Pipeline] // catchError 00:37:38.372 [Pipeline] } 00:37:38.387 [Pipeline] // wrap 00:37:38.393 [Pipeline] } 00:37:38.407 [Pipeline] // catchError 00:37:38.417 [Pipeline] stage 00:37:38.420 [Pipeline] { (Epilogue) 00:37:38.433 [Pipeline] catchError 00:37:38.435 [Pipeline] { 00:37:38.449 [Pipeline] echo 00:37:38.451 Cleanup processes 00:37:38.457 [Pipeline] sh 00:37:38.749 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:38.749 3480542 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:38.763 [Pipeline] sh 00:37:39.054 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:39.054 ++ grep -v 'sudo pgrep' 00:37:39.054 ++ awk '{print $1}' 00:37:39.054 + sudo kill -9 00:37:39.054 + true 00:37:39.066 [Pipeline] sh 00:37:39.354 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:51.606 [Pipeline] sh 00:37:51.895 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:51.895 Artifacts sizes are good 00:37:51.909 [Pipeline] archiveArtifacts 00:37:51.916 Archiving artifacts 00:37:52.125 [Pipeline] sh 00:37:52.471 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:52.489 [Pipeline] cleanWs 00:37:52.500 [WS-CLEANUP] Deleting project workspace... 00:37:52.500 [WS-CLEANUP] Deferred wipeout is used... 
00:37:52.508 [WS-CLEANUP] done 00:37:52.510 [Pipeline] } 00:37:52.528 [Pipeline] // catchError 00:37:52.540 [Pipeline] sh 00:37:52.831 + logger -p user.info -t JENKINS-CI 00:37:52.841 [Pipeline] } 00:37:52.854 [Pipeline] // stage 00:37:52.859 [Pipeline] } 00:37:52.873 [Pipeline] // node 00:37:52.878 [Pipeline] End of Pipeline 00:37:52.919 Finished: SUCCESS